00:00:00.001 Started by upstream project "autotest-per-patch" build number 132343 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.050 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.078 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.113 Using shallow fetch with depth 1 00:00:00.113 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.113 > git --version # timeout=10 00:00:00.140 > git --version # 'git version 2.39.2' 00:00:00.140 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.160 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.160 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.405 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.418 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.430 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.430 > git config core.sparsecheckout # timeout=10 00:00:02.441 > git read-tree -mu HEAD # timeout=10 00:00:02.457 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.482 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.483 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.575 [Pipeline] Start of Pipeline 00:00:02.588 [Pipeline] library 00:00:02.590 Loading library shm_lib@master 00:00:02.590 Library shm_lib@master is cached. Copying from home. 00:00:02.609 [Pipeline] node 00:00:02.639 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:02.640 [Pipeline] { 00:00:02.648 [Pipeline] catchError 00:00:02.650 [Pipeline] { 00:00:02.660 [Pipeline] wrap 00:00:02.666 [Pipeline] { 00:00:02.673 [Pipeline] stage 00:00:02.675 [Pipeline] { (Prologue) 00:00:02.692 [Pipeline] echo 00:00:02.693 Node: VM-host-SM38 00:00:02.697 [Pipeline] cleanWs 00:00:02.704 [WS-CLEANUP] Deleting project workspace... 00:00:02.704 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.710 [WS-CLEANUP] done 00:00:02.903 [Pipeline] setCustomBuildProperty 00:00:02.973 [Pipeline] httpRequest 00:00:03.299 [Pipeline] echo 00:00:03.300 Sorcerer 10.211.164.20 is alive 00:00:03.306 [Pipeline] retry 00:00:03.308 [Pipeline] { 00:00:03.319 [Pipeline] httpRequest 00:00:03.322 HttpMethod: GET 00:00:03.323 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.323 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.324 Response Code: HTTP/1.1 200 OK 00:00:03.324 Success: Status code 200 is in the accepted range: 200,404 00:00:03.325 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.505 [Pipeline] } 00:00:03.519 [Pipeline] // retry 00:00:03.526 [Pipeline] sh 00:00:03.800 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.835 [Pipeline] httpRequest 00:00:04.300 [Pipeline] echo 00:00:04.302 Sorcerer 10.211.164.20 is alive 00:00:04.315 [Pipeline] retry 00:00:04.317 [Pipeline] { 00:00:04.335 [Pipeline] httpRequest 00:00:04.340 HttpMethod: GET 00:00:04.340 URL: http://10.211.164.20/packages/spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz 00:00:04.340 Sending request to url: http://10.211.164.20/packages/spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz 00:00:04.341 Response Code: HTTP/1.1 200 OK 00:00:04.341 Success: Status code 200 is in the accepted range: 200,404 00:00:04.342 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz 00:00:24.401 [Pipeline] } 00:00:24.420 [Pipeline] // retry 00:00:24.428 [Pipeline] sh 00:00:24.706 + tar --no-same-owner -xf spdk_9b64b1304a1110564887d506b0fb7b0ef65899c9.tar.gz 00:00:27.247 [Pipeline] sh 00:00:27.527 + git -C spdk log --oneline -n5 00:00:27.527 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:00:27.527 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:00:27.527 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:00:27.527 095307e93 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:00:27.527 3b3a1a596 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:00:27.546 [Pipeline] writeFile 00:00:27.562 [Pipeline] sh 00:00:27.891 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:27.902 [Pipeline] sh 00:00:28.180 + cat autorun-spdk.conf 00:00:28.180 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.180 SPDK_TEST_NVME=1 00:00:28.180 SPDK_TEST_FTL=1 00:00:28.180 SPDK_TEST_ISAL=1 00:00:28.180 SPDK_RUN_ASAN=1 00:00:28.180 SPDK_RUN_UBSAN=1 00:00:28.180 SPDK_TEST_XNVME=1 00:00:28.180 SPDK_TEST_NVME_FDP=1 00:00:28.180 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.186 RUN_NIGHTLY=0 00:00:28.189 [Pipeline] } 00:00:28.206 [Pipeline] // stage 00:00:28.226 [Pipeline] stage 00:00:28.229 [Pipeline] { (Run VM) 00:00:28.242 [Pipeline] sh 00:00:28.521 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:28.521 + echo 'Start stage prepare_nvme.sh' 00:00:28.521 Start stage prepare_nvme.sh 00:00:28.521 + [[ -n 1 ]] 00:00:28.521 + disk_prefix=ex1 00:00:28.521 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:00:28.521 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:00:28.521 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:00:28.521 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.521 ++ 
SPDK_TEST_NVME=1 00:00:28.521 ++ SPDK_TEST_FTL=1 00:00:28.521 ++ SPDK_TEST_ISAL=1 00:00:28.521 ++ SPDK_RUN_ASAN=1 00:00:28.521 ++ SPDK_RUN_UBSAN=1 00:00:28.521 ++ SPDK_TEST_XNVME=1 00:00:28.521 ++ SPDK_TEST_NVME_FDP=1 00:00:28.521 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.521 ++ RUN_NIGHTLY=0 00:00:28.521 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:00:28.521 + nvme_files=() 00:00:28.521 + declare -A nvme_files 00:00:28.521 + backend_dir=/var/lib/libvirt/images/backends 00:00:28.521 + nvme_files['nvme.img']=5G 00:00:28.521 + nvme_files['nvme-cmb.img']=5G 00:00:28.521 + nvme_files['nvme-multi0.img']=4G 00:00:28.521 + nvme_files['nvme-multi1.img']=4G 00:00:28.521 + nvme_files['nvme-multi2.img']=4G 00:00:28.521 + nvme_files['nvme-openstack.img']=8G 00:00:28.521 + nvme_files['nvme-zns.img']=5G 00:00:28.521 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:28.521 + (( SPDK_TEST_FTL == 1 )) 00:00:28.521 + nvme_files["nvme-ftl.img"]=6G 00:00:28.521 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:28.521 + nvme_files["nvme-fdp.img"]=1G 00:00:28.521 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:28.521 + for nvme in "${!nvme_files[@]}" 00:00:28.521 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:28.521 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.521 + for nvme in "${!nvme_files[@]}" 00:00:28.521 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:00:29.088 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:29.088 + for nvme in "${!nvme_files[@]}" 00:00:29.088 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:29.088 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.088 + for nvme in "${!nvme_files[@]}" 00:00:29.088 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:29.088 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:29.088 + for nvme in "${!nvme_files[@]}" 00:00:29.088 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:29.088 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.088 + for nvme in "${!nvme_files[@]}" 00:00:29.088 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:29.088 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.088 + for nvme in "${!nvme_files[@]}" 00:00:29.088 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:29.088 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.088 + for nvme in "${!nvme_files[@]}" 00:00:29.088 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:00:29.088 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:29.354 + for nvme in "${!nvme_files[@]}" 00:00:29.354 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:29.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.616 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:29.616 + echo 'End stage prepare_nvme.sh' 00:00:29.616 End stage prepare_nvme.sh 00:00:29.626 [Pipeline] sh 00:00:29.902 + DISTRO=fedora39 00:00:29.902 + CPUS=10 00:00:29.902 + RAM=12288 00:00:29.902 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:29.902 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:29.902 00:00:29.902 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:00:29.902 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:00:29.902 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:00:29.902 HELP=0 00:00:29.902 DRY_RUN=0 00:00:29.902 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:00:29.902 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:29.902 NVME_AUTO_CREATE=0 00:00:29.902 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:00:29.902 NVME_CMB=,,,, 00:00:29.902 NVME_PMR=,,,, 00:00:29.902 NVME_ZNS=,,,, 00:00:29.902 NVME_MS=true,,,, 00:00:29.902 NVME_FDP=,,,on, 00:00:29.902 SPDK_VAGRANT_DISTRO=fedora39 00:00:29.902 SPDK_VAGRANT_VMCPU=10 00:00:29.902 SPDK_VAGRANT_VMRAM=12288 00:00:29.902 SPDK_VAGRANT_PROVIDER=libvirt 00:00:29.902 SPDK_VAGRANT_HTTP_PROXY= 00:00:29.902 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:29.902 SPDK_OPENSTACK_NETWORK=0 00:00:29.902 VAGRANT_PACKAGE_BOX=0 00:00:29.902 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:29.902 FORCE_DISTRO=true 00:00:29.902 VAGRANT_BOX_VERSION= 00:00:29.902 EXTRA_VAGRANTFILES= 00:00:29.902 NIC_MODEL=e1000 00:00:29.902 00:00:29.902 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:00:29.902 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:00:32.426 Bringing machine 'default' up with 'libvirt' provider... 00:00:32.991 ==> default: Creating image (snapshot of base box volume). 00:00:32.991 ==> default: Creating domain with the following settings... 
00:00:32.991 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732082332_e25a14ce53f00173cb8d 00:00:32.991 ==> default: -- Domain type: kvm 00:00:32.991 ==> default: -- Cpus: 10 00:00:32.991 ==> default: -- Feature: acpi 00:00:32.991 ==> default: -- Feature: apic 00:00:32.991 ==> default: -- Feature: pae 00:00:32.991 ==> default: -- Memory: 12288M 00:00:32.991 ==> default: -- Memory Backing: hugepages: 00:00:32.991 ==> default: -- Management MAC: 00:00:32.991 ==> default: -- Loader: 00:00:32.991 ==> default: -- Nvram: 00:00:32.991 ==> default: -- Base box: spdk/fedora39 00:00:32.991 ==> default: -- Storage pool: default 00:00:32.991 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732082332_e25a14ce53f00173cb8d.img (20G) 00:00:32.992 ==> default: -- Volume Cache: default 00:00:32.992 ==> default: -- Kernel: 00:00:32.992 ==> default: -- Initrd: 00:00:32.992 ==> default: -- Graphics Type: vnc 00:00:32.992 ==> default: -- Graphics Port: -1 00:00:32.992 ==> default: -- Graphics IP: 127.0.0.1 00:00:32.992 ==> default: -- Graphics Password: Not defined 00:00:32.992 ==> default: -- Video Type: cirrus 00:00:32.992 ==> default: -- Video VRAM: 9216 00:00:32.992 ==> default: -- Sound Type: 00:00:32.992 ==> default: -- Keymap: en-us 00:00:32.992 ==> default: -- TPM Path: 00:00:32.992 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:32.992 ==> default: -- Command line args: 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:32.992 ==> default: -> value=-drive, 00:00:32.992 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:32.992 ==> default: -> value=-drive, 00:00:32.992 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:32.992 ==> default: -> value=-drive, 00:00:32.992 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.992 ==> default: -> value=-drive, 00:00:32.992 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.992 ==> default: -> value=-drive, 00:00:32.992 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:32.992 ==> default: -> value=-drive, 00:00:32.992 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:32.992 ==> default: -> value=-device, 00:00:32.992 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.992 ==> default: Creating shared folders metadata... 00:00:32.992 ==> default: Starting domain. 00:00:33.925 ==> default: Waiting for domain to get an IP address... 00:00:48.788 ==> default: Waiting for SSH to become available... 00:00:48.788 ==> default: Configuring and enabling network interfaces... 00:00:52.066 default: SSH address: 192.168.121.50:22 00:00:52.066 default: SSH username: vagrant 00:00:52.066 default: SSH auth method: private key 00:00:54.044 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:00.683 ==> default: Mounting SSHFS shared folder... 00:01:02.063 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:02.063 ==> default: Checking Mount.. 00:01:03.002 ==> default: Folder Successfully Mounted! 00:01:03.002 00:01:03.002 SUCCESS! 00:01:03.002 00:01:03.002 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:03.002 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:03.002 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:03.002 00:01:03.013 [Pipeline] } 00:01:03.030 [Pipeline] // stage 00:01:03.040 [Pipeline] dir 00:01:03.041 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:01:03.043 [Pipeline] { 00:01:03.057 [Pipeline] catchError 00:01:03.059 [Pipeline] { 00:01:03.071 [Pipeline] sh 00:01:03.354 + vagrant ssh-config --host vagrant 00:01:03.354 + sed -ne '/^Host/,$p' 00:01:03.354 + tee ssh_conf 00:01:05.887 Host vagrant 00:01:05.887 HostName 192.168.121.50 00:01:05.887 User vagrant 00:01:05.887 Port 22 00:01:05.887 UserKnownHostsFile /dev/null 00:01:05.887 StrictHostKeyChecking no 00:01:05.887 PasswordAuthentication no 00:01:05.887 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:05.887 IdentitiesOnly yes 00:01:05.887 LogLevel FATAL 00:01:05.887 ForwardAgent yes 00:01:05.887 ForwardX11 yes 00:01:05.887 00:01:05.898 [Pipeline] withEnv 00:01:05.900 [Pipeline] { 00:01:05.913 [Pipeline] sh 00:01:06.191 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:06.191 source /etc/os-release 00:01:06.191 [[ -e /image.version ]] && img=$(< /image.version) 00:01:06.191 # Minimal, systemd-like check. 
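# annotation (not part of the captured job output): in the container check
# below, ${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} strips everything up to
# and including the first "_", matching the agt-er_autotest_547-896 ->
# autotest_547-896 example above; since $HOSTNAME inside a container is the
# container id, "agent" ends up as "<container-id>@<trimmed-agent-name>".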
00:01:06.191 if [[ -e /.dockerenv ]]; then 00:01:06.191 # Clear garbage from the node'\''s name: 00:01:06.191 # agt-er_autotest_547-896 -> autotest_547-896 00:01:06.191 # $HOSTNAME is the actual container id 00:01:06.191 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:06.191 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:06.191 # We can assume this is a mount from a host where container is running, 00:01:06.191 # so fetch its hostname to easily identify the target swarm worker. 00:01:06.191 container="$(< /etc/hostname) ($agent)" 00:01:06.191 else 00:01:06.191 # Fallback 00:01:06.191 container=$agent 00:01:06.191 fi 00:01:06.191 fi 00:01:06.191 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:06.191 ' 00:01:06.199 [Pipeline] } 00:01:06.214 [Pipeline] // withEnv 00:01:06.222 [Pipeline] setCustomBuildProperty 00:01:06.234 [Pipeline] stage 00:01:06.236 [Pipeline] { (Tests) 00:01:06.251 [Pipeline] sh 00:01:06.528 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:06.540 [Pipeline] sh 00:01:06.818 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:06.834 [Pipeline] timeout 00:01:06.835 Timeout set to expire in 50 min 00:01:06.836 [Pipeline] { 00:01:06.849 [Pipeline] sh 00:01:07.132 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:07.703 HEAD is now at 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:01:07.717 [Pipeline] sh 00:01:08.000 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:08.272 [Pipeline] sh 00:01:08.556 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:08.834 [Pipeline] sh 00:01:09.118 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:09.379 ++ readlink -f spdk_repo 00:01:09.379 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:09.379 + [[ -n /home/vagrant/spdk_repo ]] 00:01:09.379 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:09.379 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:09.379 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:09.379 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:09.379 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:09.379 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:09.379 + cd /home/vagrant/spdk_repo 00:01:09.379 + source /etc/os-release 00:01:09.379 ++ NAME='Fedora Linux' 00:01:09.379 ++ VERSION='39 (Cloud Edition)' 00:01:09.379 ++ ID=fedora 00:01:09.379 ++ VERSION_ID=39 00:01:09.379 ++ VERSION_CODENAME= 00:01:09.379 ++ PLATFORM_ID=platform:f39 00:01:09.379 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:09.379 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:09.379 ++ LOGO=fedora-logo-icon 00:01:09.379 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:09.379 ++ HOME_URL=https://fedoraproject.org/ 00:01:09.379 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:09.379 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:09.379 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:09.379 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:09.379 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:09.379 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:09.379 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:09.379 ++ SUPPORT_END=2024-11-12 00:01:09.379 ++ VARIANT='Cloud Edition' 00:01:09.379 ++ VARIANT_ID=cloud 00:01:09.379 + uname -a 00:01:09.379 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:09.379 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:09.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:09.898 Hugepages 00:01:09.898 node hugesize free / total 00:01:09.898 node0 1048576kB 0 / 0 00:01:09.898 node0 2048kB 0 / 0 00:01:09.898 00:01:09.898 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:09.898 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:09.898 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:09.898 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:09.898 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:01:09.898 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:09.898 + rm -f /tmp/spdk-ld-path 00:01:09.898 + source autorun-spdk.conf 00:01:09.898 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.898 ++ SPDK_TEST_NVME=1 00:01:09.898 ++ SPDK_TEST_FTL=1 00:01:09.898 ++ SPDK_TEST_ISAL=1 00:01:09.898 ++ SPDK_RUN_ASAN=1 00:01:09.898 ++ SPDK_RUN_UBSAN=1 00:01:09.898 ++ SPDK_TEST_XNVME=1 00:01:09.898 ++ SPDK_TEST_NVME_FDP=1 00:01:09.898 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.898 ++ RUN_NIGHTLY=0 00:01:09.898 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:09.898 + [[ -n '' ]] 00:01:09.898 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:09.898 + for M in /var/spdk/build-*-manifest.txt 00:01:09.898 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:09.898 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:09.898 + for M in /var/spdk/build-*-manifest.txt 00:01:09.898 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:09.898 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:09.898 + for M in /var/spdk/build-*-manifest.txt 00:01:09.898 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:09.898 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:10.158 ++ uname 00:01:10.158 + [[ Linux == \L\i\n\u\x ]] 00:01:10.158 + sudo dmesg -T 00:01:10.158 + sudo dmesg --clear 00:01:10.158 + dmesg_pid=5027 00:01:10.158 
+ [[ Fedora Linux == FreeBSD ]] 00:01:10.158 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.158 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.158 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:10.158 + sudo dmesg -Tw 00:01:10.158 + [[ -x /usr/src/fio-static/fio ]] 00:01:10.158 + export FIO_BIN=/usr/src/fio-static/fio 00:01:10.158 + FIO_BIN=/usr/src/fio-static/fio 00:01:10.158 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:10.158 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:10.158 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:10.158 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.158 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.158 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:10.158 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.158 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.158 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:10.158 05:59:29 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:10.158 05:59:29 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.158 05:59:29 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:10.158 05:59:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:10.158 05:59:29 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:10.158 05:59:29 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:10.158 05:59:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:10.158 05:59:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:10.158 05:59:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:10.158 05:59:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:10.158 05:59:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:10.158 05:59:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.158 05:59:29 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.158 05:59:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.158 05:59:29 -- paths/export.sh@5 -- $ export PATH 00:01:10.158 05:59:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.158 05:59:29 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:10.158 05:59:29 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:10.158 05:59:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732082369.XXXXXX 00:01:10.158 05:59:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732082369.BLZZnh 00:01:10.158 05:59:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:10.158 05:59:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:10.158 05:59:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:10.158 05:59:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:10.158 05:59:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:10.158 05:59:29 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:10.158 05:59:29 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:10.158 05:59:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.158 05:59:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:10.158 05:59:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:10.158 05:59:29 -- pm/common@17 -- $ local monitor 00:01:10.158 05:59:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.158 05:59:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:10.158 05:59:29 -- pm/common@25 -- $ sleep 1 00:01:10.158 05:59:29 -- pm/common@21 -- $ date +%s 00:01:10.158 05:59:29 -- pm/common@21 -- $ date +%s 00:01:10.159 05:59:29 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732082369 00:01:10.159 05:59:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732082369 00:01:10.159 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732082369_collect-cpu-load.pm.log 00:01:10.159 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732082369_collect-vmstat.pm.log 00:01:11.543 05:59:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:11.543 05:59:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:11.543 05:59:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:11.543 05:59:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:11.543 05:59:30 -- spdk/autobuild.sh@16 -- $ date -u 00:01:11.543 Wed Nov 20 05:59:30 AM UTC 2024 00:01:11.543 05:59:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:11.543 v25.01-pre-190-g9b64b1304 00:01:11.543 05:59:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:11.543 05:59:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:11.543 05:59:30 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:11.543 05:59:30 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:11.543 05:59:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.543 ************************************ 00:01:11.543 START TEST asan 00:01:11.543 ************************************ 00:01:11.543 using asan 00:01:11.543 05:59:30 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:01:11.543 00:01:11.543 real 0m0.000s 00:01:11.543 user 0m0.000s 00:01:11.543 sys 0m0.000s 00:01:11.543 ************************************ 00:01:11.543 05:59:30 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:11.543 05:59:30 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:11.543 END TEST asan 00:01:11.543 ************************************ 00:01:11.543 05:59:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:11.543 05:59:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:11.543 05:59:30 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:11.543 05:59:30 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:11.543 05:59:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.543 ************************************ 00:01:11.543 START TEST ubsan 00:01:11.543 ************************************ 00:01:11.543 using ubsan 00:01:11.543 05:59:30 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:11.543 00:01:11.543 real 0m0.000s 00:01:11.543 user 0m0.000s 00:01:11.543 sys 0m0.000s 00:01:11.543 05:59:30 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:11.543 05:59:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:11.543 ************************************ 00:01:11.543 END TEST ubsan 00:01:11.543 ************************************ 00:01:11.543 05:59:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:11.543 05:59:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:11.543 05:59:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:11.543 05:59:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:11.543 05:59:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:11.543 05:59:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:11.543 05:59:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:11.543 05:59:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:11.543 05:59:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:11.543 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:11.543 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:11.808 Using 'verbs' RDMA provider 00:01:24.998 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:35.096 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:35.096 Creating mk/config.mk...done. 00:01:35.096 Creating mk/cc.flags.mk...done. 00:01:35.096 Type 'make' to build. 00:01:35.096 05:59:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:35.096 05:59:54 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:35.096 05:59:54 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:35.096 05:59:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.096 ************************************ 00:01:35.096 START TEST make 00:01:35.096 ************************************ 00:01:35.096 05:59:54 make -- common/autotest_common.sh@1127 -- $ make -j10 00:01:35.096 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:35.096 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:35.096 meson setup builddir \ 00:01:35.096 -Dwith-libaio=enabled \ 00:01:35.096 -Dwith-liburing=enabled \ 00:01:35.096 -Dwith-libvfn=disabled \ 00:01:35.096 -Dwith-spdk=disabled \ 00:01:35.096 -Dexamples=false \ 00:01:35.096 -Dtests=false \ 00:01:35.096 -Dtools=false && \ 00:01:35.096 meson compile -C builddir && \ 00:01:35.096 cd -) 00:01:35.096 make[1]: Nothing to be done for 'all'. 
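For reference, the xnvme configure-and-build subshell shown above amounts to the following standalone commands (a sketch reconstructed from this log with the same flags; the /home/vagrant/spdk_repo path is specific to this job):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH="$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir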
00:01:37.644 The Meson build system 00:01:37.644 Version: 1.5.0 00:01:37.644 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:37.644 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:37.644 Build type: native build 00:01:37.644 Project name: xnvme 00:01:37.645 Project version: 0.7.5 00:01:37.645 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:37.645 C linker for the host machine: cc ld.bfd 2.40-14 00:01:37.645 Host machine cpu family: x86_64 00:01:37.645 Host machine cpu: x86_64 00:01:37.645 Message: host_machine.system: linux 00:01:37.645 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:37.645 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:37.645 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:37.645 Run-time dependency threads found: YES 00:01:37.645 Has header "setupapi.h" : NO 00:01:37.645 Has header "linux/blkzoned.h" : YES 00:01:37.645 Has header "linux/blkzoned.h" : YES (cached) 00:01:37.645 Has header "libaio.h" : YES 00:01:37.645 Library aio found: YES 00:01:37.645 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:37.645 Run-time dependency liburing found: YES 2.2 00:01:37.645 Dependency libvfn skipped: feature with-libvfn disabled 00:01:37.645 Found CMake: /usr/bin/cmake (3.27.7) 00:01:37.645 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:01:37.645 Subproject spdk : skipped: feature with-spdk disabled 00:01:37.645 Run-time dependency appleframeworks found: NO (tried framework) 00:01:37.645 Run-time dependency appleframeworks found: NO (tried framework) 00:01:37.645 Library rt found: YES 00:01:37.645 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:37.645 Configuring xnvme_config.h using configuration 00:01:37.645 Configuring xnvme.spec using configuration 00:01:37.645 Run-time dependency bash-completion found: YES 2.11 00:01:37.645 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:37.645 Program cp found: YES (/usr/bin/cp) 00:01:37.645 Build targets in project: 3 00:01:37.645 00:01:37.645 xnvme 0.7.5 00:01:37.645 00:01:37.645 Subprojects 00:01:37.645 spdk : NO Feature 'with-spdk' disabled 00:01:37.645 00:01:37.645 User defined options 00:01:37.645 examples : false 00:01:37.645 tests : false 00:01:37.645 tools : false 00:01:37.645 with-libaio : enabled 00:01:37.645 with-liburing: enabled 00:01:37.645 with-libvfn : disabled 00:01:37.645 with-spdk : disabled 00:01:37.645 00:01:37.645 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.907 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:37.907 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:01:37.907 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:01:37.907 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:01:37.907 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:01:37.907 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:01:37.907 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:01:37.907 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:01:37.907 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:01:37.907 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:01:37.907 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:01:37.907 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:01:37.907 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:01:38.168 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:01:38.168 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:01:38.168 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:01:38.168 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:01:38.168 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:01:38.168 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:01:38.168 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:01:38.168 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:01:38.168 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:01:38.168 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:01:38.168 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:01:38.168 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:01:38.168 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:01:38.168 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:01:38.168 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:01:38.168 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:01:38.168 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:01:38.168 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:01:38.168 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:01:38.168 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:01:38.168 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:01:38.168 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:01:38.168 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:01:38.168 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:01:38.428 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:01:38.428 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:01:38.428 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:01:38.428 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:01:38.428 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:01:38.428 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:01:38.428 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:01:38.428 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:01:38.428 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:01:38.428 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:01:38.428 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:01:38.428 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:01:38.428 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:01:38.428 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 
00:01:38.429 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:01:38.429 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:01:38.429 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:01:38.429 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:01:38.429 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:01:38.429 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:01:38.429 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:01:38.429 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:01:38.429 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:01:38.429 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:01:38.429 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:01:38.701 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:01:38.701 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:01:38.701 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:01:38.701 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:01:38.701 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:01:38.701 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:01:38.701 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:01:38.701 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:01:38.701 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:01:38.701 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:01:38.701 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:01:38.965 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:01:38.965 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:01:38.965 [75/76] Linking static target lib/libxnvme.a 00:01:38.965 [76/76] Linking target lib/libxnvme.so.0.7.5 00:01:38.965 INFO: autodetecting backend as ninja 00:01:38.965 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:39.227 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:01:45.809 The Meson build system 00:01:45.809 Version: 1.5.0 00:01:45.809 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:45.809 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:45.809 Build type: native build 00:01:45.809 Program cat found: YES (/usr/bin/cat) 00:01:45.809 Project name: DPDK 00:01:45.809 Project version: 24.03.0 00:01:45.809 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:45.809 C linker for the host machine: cc ld.bfd 2.40-14 00:01:45.809 Host machine cpu family: x86_64 00:01:45.809 Host machine cpu: x86_64 00:01:45.809 Message: ## Building in Developer Mode ## 00:01:45.809 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.809 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.809 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.809 Program python3 found: YES (/usr/bin/python3) 00:01:45.809 Program cat found: YES (/usr/bin/cat) 00:01:45.809 Compiler for C supports arguments -march=native: YES 00:01:45.809 Checking for size of "void *" : 8 00:01:45.809 Checking for size of "void *" : 8 (cached) 00:01:45.809 Compiler for C supports link 
arguments -Wl,--undefined-version: YES 00:01:45.809 Library m found: YES 00:01:45.809 Library numa found: YES 00:01:45.809 Has header "numaif.h" : YES 00:01:45.809 Library fdt found: NO 00:01:45.809 Library execinfo found: NO 00:01:45.809 Has header "execinfo.h" : YES 00:01:45.809 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:45.809 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.809 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.809 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.809 Run-time dependency openssl found: YES 3.1.1 00:01:45.809 Run-time dependency libpcap found: YES 1.10.4 00:01:45.809 Has header "pcap.h" with dependency libpcap: YES 00:01:45.809 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.809 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.809 Compiler for C supports arguments -Wformat: YES 00:01:45.809 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.809 Compiler for C supports arguments -Wformat-security: NO 00:01:45.809 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.809 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.809 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.809 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.809 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.809 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.809 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.809 Compiler for C supports arguments -Wundef: YES 00:01:45.809 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.809 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.809 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:45.809 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.809 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.809 Program objdump found: YES (/usr/bin/objdump) 00:01:45.809 Compiler for C supports arguments -mavx512f: YES 00:01:45.809 Checking if "AVX512 checking" compiles: YES 00:01:45.809 Fetching value of define "__SSE4_2__" : 1 00:01:45.809 Fetching value of define "__AES__" : 1 00:01:45.809 Fetching value of define "__AVX__" : 1 00:01:45.809 Fetching value of define "__AVX2__" : 1 00:01:45.809 Fetching value of define "__AVX512BW__" : 1 00:01:45.809 Fetching value of define "__AVX512CD__" : 1 00:01:45.809 Fetching value of define "__AVX512DQ__" : 1 00:01:45.809 Fetching value of define "__AVX512F__" : 1 00:01:45.809 Fetching value of define "__AVX512VL__" : 1 00:01:45.809 Fetching value of define "__PCLMUL__" : 1 00:01:45.809 Fetching value of define "__RDRND__" : 1 00:01:45.809 Fetching value of define "__RDSEED__" : 1 00:01:45.809 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:45.809 Fetching value of define "__znver1__" : (undefined) 00:01:45.809 Fetching value of define "__znver2__" : (undefined) 00:01:45.809 Fetching value of define "__znver3__" : (undefined) 00:01:45.809 Fetching value of define "__znver4__" : (undefined) 00:01:45.809 Library asan found: YES 00:01:45.809 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.809 Message: lib/log: Defining dependency "log" 00:01:45.809 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.809 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.809 Library rt found: YES 00:01:45.809 Checking for function "getentropy" : NO 00:01:45.809 
Message: lib/eal: Defining dependency "eal" 00:01:45.809 Message: lib/ring: Defining dependency "ring" 00:01:45.809 Message: lib/rcu: Defining dependency "rcu" 00:01:45.809 Message: lib/mempool: Defining dependency "mempool" 00:01:45.809 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.809 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.809 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.809 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.809 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.809 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.809 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:45.809 Compiler for C supports arguments -mpclmul: YES 00:01:45.809 Compiler for C supports arguments -maes: YES 00:01:45.809 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.809 Compiler for C supports arguments -mavx512bw: YES 00:01:45.809 Compiler for C supports arguments -mavx512dq: YES 00:01:45.809 Compiler for C supports arguments -mavx512vl: YES 00:01:45.809 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.809 Compiler for C supports arguments -mavx2: YES 00:01:45.809 Compiler for C supports arguments -mavx: YES 00:01:45.809 Message: lib/net: Defining dependency "net" 00:01:45.809 Message: lib/meter: Defining dependency "meter" 00:01:45.809 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.809 Message: lib/pci: Defining dependency "pci" 00:01:45.809 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.809 Message: lib/hash: Defining dependency "hash" 00:01:45.809 Message: lib/timer: Defining dependency "timer" 00:01:45.809 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.809 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.809 Message: lib/dmadev: Defining dependency "dmadev" 00:01:45.809 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.809 Message: lib/power: Defining dependency "power" 00:01:45.809 Message: lib/reorder: Defining dependency "reorder" 00:01:45.809 Message: lib/security: Defining dependency "security" 00:01:45.809 Has header "linux/userfaultfd.h" : YES 00:01:45.809 Has header "linux/vduse.h" : YES 00:01:45.809 Message: lib/vhost: Defining dependency "vhost" 00:01:45.810 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.810 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.810 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.810 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.810 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.810 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.810 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.810 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.810 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.810 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.810 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:45.810 Configuring doxy-api-html.conf using configuration 00:01:45.810 Configuring doxy-api-man.conf using configuration 00:01:45.810 Program mandb found: YES (/usr/bin/mandb) 00:01:45.810 Program sphinx-build found: NO 00:01:45.810 Configuring rte_build_config.h using configuration 00:01:45.810 Message: 00:01:45.810 ================= 00:01:45.810 Applications Enabled 
00:01:45.810 ================= 00:01:45.810 00:01:45.810 apps: 00:01:45.810 00:01:45.810 00:01:45.810 Message: 00:01:45.810 ================= 00:01:45.810 Libraries Enabled 00:01:45.810 ================= 00:01:45.810 00:01:45.810 libs: 00:01:45.810 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.810 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.810 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.810 00:01:45.810 Message: 00:01:45.810 =============== 00:01:45.810 Drivers Enabled 00:01:45.810 =============== 00:01:45.810 00:01:45.810 common: 00:01:45.810 00:01:45.810 bus: 00:01:45.810 pci, vdev, 00:01:45.810 mempool: 00:01:45.810 ring, 00:01:45.810 dma: 00:01:45.810 00:01:45.810 net: 00:01:45.810 00:01:45.810 crypto: 00:01:45.810 00:01:45.810 compress: 00:01:45.810 00:01:45.810 vdpa: 00:01:45.810 00:01:45.810 00:01:45.810 Message: 00:01:45.810 ================= 00:01:45.810 Content Skipped 00:01:45.810 ================= 00:01:45.810 00:01:45.810 apps: 00:01:45.810 dumpcap: explicitly disabled via build config 00:01:45.810 graph: explicitly disabled via build config 00:01:45.810 pdump: explicitly disabled via build config 00:01:45.810 proc-info: explicitly disabled via build config 00:01:45.810 test-acl: explicitly disabled via build config 00:01:45.810 test-bbdev: explicitly disabled via build config 00:01:45.810 test-cmdline: explicitly disabled via build config 00:01:45.810 test-compress-perf: explicitly disabled via build config 00:01:45.810 test-crypto-perf: explicitly disabled via build config 00:01:45.810 test-dma-perf: explicitly disabled via build config 00:01:45.810 test-eventdev: explicitly disabled via build config 00:01:45.810 test-fib: explicitly disabled via build config 00:01:45.810 test-flow-perf: explicitly disabled via build config 00:01:45.810 test-gpudev: explicitly disabled via build config 00:01:45.810 test-mldev: explicitly disabled via build config 00:01:45.810 test-pipeline: explicitly disabled via build config 00:01:45.810 test-pmd: explicitly disabled via build config 00:01:45.810 test-regex: explicitly disabled via build config 00:01:45.810 test-sad: explicitly disabled via build config 00:01:45.810 test-security-perf: explicitly disabled via build config 00:01:45.810 00:01:45.810 libs: 00:01:45.810 argparse: explicitly disabled via build config 00:01:45.810 metrics: explicitly disabled via build config 00:01:45.810 acl: explicitly disabled via build config 00:01:45.810 bbdev: explicitly disabled via build config 00:01:45.810 bitratestats: explicitly disabled via build config 00:01:45.810 bpf: explicitly disabled via build config 00:01:45.810 cfgfile: explicitly disabled via build config 00:01:45.810 distributor: explicitly disabled via build config 00:01:45.810 efd: explicitly disabled via build config 00:01:45.810 eventdev: explicitly disabled via build config 00:01:45.810 dispatcher: explicitly disabled via build config 00:01:45.810 gpudev: explicitly disabled via build config 00:01:45.810 gro: explicitly disabled via build config 00:01:45.810 gso: explicitly disabled via build config 00:01:45.810 ip_frag: explicitly disabled via build config 00:01:45.810 jobstats: explicitly disabled via build config 00:01:45.810 latencystats: explicitly disabled via build config 00:01:45.810 lpm: explicitly disabled via build config 00:01:45.810 member: explicitly disabled via build config 00:01:45.810 pcapng: explicitly disabled via build config 00:01:45.810 rawdev: explicitly disabled via build config 00:01:45.810 
regexdev: explicitly disabled via build config 00:01:45.810 mldev: explicitly disabled via build config 00:01:45.810 rib: explicitly disabled via build config 00:01:45.810 sched: explicitly disabled via build config 00:01:45.810 stack: explicitly disabled via build config 00:01:45.810 ipsec: explicitly disabled via build config 00:01:45.810 pdcp: explicitly disabled via build config 00:01:45.810 fib: explicitly disabled via build config 00:01:45.810 port: explicitly disabled via build config 00:01:45.810 pdump: explicitly disabled via build config 00:01:45.810 table: explicitly disabled via build config 00:01:45.810 pipeline: explicitly disabled via build config 00:01:45.810 graph: explicitly disabled via build config 00:01:45.810 node: explicitly disabled via build config 00:01:45.810 00:01:45.810 drivers: 00:01:45.810 common/cpt: not in enabled drivers build config 00:01:45.810 common/dpaax: not in enabled drivers build config 00:01:45.810 common/iavf: not in enabled drivers build config 00:01:45.810 common/idpf: not in enabled drivers build config 00:01:45.810 common/ionic: not in enabled drivers build config 00:01:45.810 common/mvep: not in enabled drivers build config 00:01:45.810 common/octeontx: not in enabled drivers build config 00:01:45.810 bus/auxiliary: not in enabled drivers build config 00:01:45.810 bus/cdx: not in enabled drivers build config 00:01:45.810 bus/dpaa: not in enabled drivers build config 00:01:45.810 bus/fslmc: not in enabled drivers build config 00:01:45.810 bus/ifpga: not in enabled drivers build config 00:01:45.810 bus/platform: not in enabled drivers build config 00:01:45.810 bus/uacce: not in enabled drivers build config 00:01:45.810 bus/vmbus: not in enabled drivers build config 00:01:45.810 common/cnxk: not in enabled drivers build config 00:01:45.810 common/mlx5: not in enabled drivers build config 00:01:45.810 common/nfp: not in enabled drivers build config 00:01:45.810 common/nitrox: not in enabled drivers build config 00:01:45.810 common/qat: not in enabled drivers build config 00:01:45.810 common/sfc_efx: not in enabled drivers build config 00:01:45.810 mempool/bucket: not in enabled drivers build config 00:01:45.810 mempool/cnxk: not in enabled drivers build config 00:01:45.810 mempool/dpaa: not in enabled drivers build config 00:01:45.810 mempool/dpaa2: not in enabled drivers build config 00:01:45.810 mempool/octeontx: not in enabled drivers build config 00:01:45.810 mempool/stack: not in enabled drivers build config 00:01:45.810 dma/cnxk: not in enabled drivers build config 00:01:45.810 dma/dpaa: not in enabled drivers build config 00:01:45.810 dma/dpaa2: not in enabled drivers build config 00:01:45.810 dma/hisilicon: not in enabled drivers build config 00:01:45.810 dma/idxd: not in enabled drivers build config 00:01:45.810 dma/ioat: not in enabled drivers build config 00:01:45.810 dma/skeleton: not in enabled drivers build config 00:01:45.810 net/af_packet: not in enabled drivers build config 00:01:45.810 net/af_xdp: not in enabled drivers build config 00:01:45.810 net/ark: not in enabled drivers build config 00:01:45.810 net/atlantic: not in enabled drivers build config 00:01:45.810 net/avp: not in enabled drivers build config 00:01:45.810 net/axgbe: not in enabled drivers build config 00:01:45.810 net/bnx2x: not in enabled drivers build config 00:01:45.810 net/bnxt: not in enabled drivers build config 00:01:45.810 net/bonding: not in enabled drivers build config 00:01:45.810 net/cnxk: not in enabled drivers build config 00:01:45.810 net/cpfl: 
not in enabled drivers build config 00:01:45.810 net/cxgbe: not in enabled drivers build config 00:01:45.810 net/dpaa: not in enabled drivers build config 00:01:45.810 net/dpaa2: not in enabled drivers build config 00:01:45.810 net/e1000: not in enabled drivers build config 00:01:45.810 net/ena: not in enabled drivers build config 00:01:45.810 net/enetc: not in enabled drivers build config 00:01:45.810 net/enetfec: not in enabled drivers build config 00:01:45.810 net/enic: not in enabled drivers build config 00:01:45.810 net/failsafe: not in enabled drivers build config 00:01:45.810 net/fm10k: not in enabled drivers build config 00:01:45.810 net/gve: not in enabled drivers build config 00:01:45.810 net/hinic: not in enabled drivers build config 00:01:45.810 net/hns3: not in enabled drivers build config 00:01:45.810 net/i40e: not in enabled drivers build config 00:01:45.810 net/iavf: not in enabled drivers build config 00:01:45.810 net/ice: not in enabled drivers build config 00:01:45.810 net/idpf: not in enabled drivers build config 00:01:45.810 net/igc: not in enabled drivers build config 00:01:45.810 net/ionic: not in enabled drivers build config 00:01:45.810 net/ipn3ke: not in enabled drivers build config 00:01:45.810 net/ixgbe: not in enabled drivers build config 00:01:45.810 net/mana: not in enabled drivers build config 00:01:45.810 net/memif: not in enabled drivers build config 00:01:45.810 net/mlx4: not in enabled drivers build config 00:01:45.810 net/mlx5: not in enabled drivers build config 00:01:45.810 net/mvneta: not in enabled drivers build config 00:01:45.810 net/mvpp2: not in enabled drivers build config 00:01:45.810 net/netvsc: not in enabled drivers build config 00:01:45.810 net/nfb: not in enabled drivers build config 00:01:45.810 net/nfp: not in enabled drivers build config 00:01:45.810 net/ngbe: not in enabled drivers build config 00:01:45.810 net/null: not in enabled drivers build config 00:01:45.810 net/octeontx: not in enabled drivers build config 00:01:45.810 net/octeon_ep: not in enabled drivers build config 00:01:45.811 net/pcap: not in enabled drivers build config 00:01:45.811 net/pfe: not in enabled drivers build config 00:01:45.811 net/qede: not in enabled drivers build config 00:01:45.811 net/ring: not in enabled drivers build config 00:01:45.811 net/sfc: not in enabled drivers build config 00:01:45.811 net/softnic: not in enabled drivers build config 00:01:45.811 net/tap: not in enabled drivers build config 00:01:45.811 net/thunderx: not in enabled drivers build config 00:01:45.811 net/txgbe: not in enabled drivers build config 00:01:45.811 net/vdev_netvsc: not in enabled drivers build config 00:01:45.811 net/vhost: not in enabled drivers build config 00:01:45.811 net/virtio: not in enabled drivers build config 00:01:45.811 net/vmxnet3: not in enabled drivers build config 00:01:45.811 raw/*: missing internal dependency, "rawdev" 00:01:45.811 crypto/armv8: not in enabled drivers build config 00:01:45.811 crypto/bcmfs: not in enabled drivers build config 00:01:45.811 crypto/caam_jr: not in enabled drivers build config 00:01:45.811 crypto/ccp: not in enabled drivers build config 00:01:45.811 crypto/cnxk: not in enabled drivers build config 00:01:45.811 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.811 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.811 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.811 crypto/mlx5: not in enabled drivers build config 00:01:45.811 crypto/mvsam: not in enabled drivers build config 
00:01:45.811 crypto/nitrox: not in enabled drivers build config 00:01:45.811 crypto/null: not in enabled drivers build config 00:01:45.811 crypto/octeontx: not in enabled drivers build config 00:01:45.811 crypto/openssl: not in enabled drivers build config 00:01:45.811 crypto/scheduler: not in enabled drivers build config 00:01:45.811 crypto/uadk: not in enabled drivers build config 00:01:45.811 crypto/virtio: not in enabled drivers build config 00:01:45.811 compress/isal: not in enabled drivers build config 00:01:45.811 compress/mlx5: not in enabled drivers build config 00:01:45.811 compress/nitrox: not in enabled drivers build config 00:01:45.811 compress/octeontx: not in enabled drivers build config 00:01:45.811 compress/zlib: not in enabled drivers build config 00:01:45.811 regex/*: missing internal dependency, "regexdev" 00:01:45.811 ml/*: missing internal dependency, "mldev" 00:01:45.811 vdpa/ifc: not in enabled drivers build config 00:01:45.811 vdpa/mlx5: not in enabled drivers build config 00:01:45.811 vdpa/nfp: not in enabled drivers build config 00:01:45.811 vdpa/sfc: not in enabled drivers build config 00:01:45.811 event/*: missing internal dependency, "eventdev" 00:01:45.811 baseband/*: missing internal dependency, "bbdev" 00:01:45.811 gpu/*: missing internal dependency, "gpudev" 00:01:45.811 00:01:45.811 00:01:46.073 Build targets in project: 84 00:01:46.073 00:01:46.073 DPDK 24.03.0 00:01:46.073 00:01:46.073 User defined options 00:01:46.073 buildtype : debug 00:01:46.073 default_library : shared 00:01:46.073 libdir : lib 00:01:46.073 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:46.073 b_sanitize : address 00:01:46.073 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:46.073 c_link_args : 00:01:46.073 cpu_instruction_set: native 00:01:46.073 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:46.073 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:46.073 enable_docs : false 00:01:46.073 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:46.073 enable_kmods : false 00:01:46.073 max_lcores : 128 00:01:46.073 tests : false 00:01:46.073 00:01:46.073 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.647 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:46.647 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.647 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.647 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.647 [4/267] Linking static target lib/librte_kvargs.a 00:01:46.647 [5/267] Linking static target lib/librte_log.a 00:01:46.647 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.223 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.223 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.223 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.223 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:47.223 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:47.223 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.223 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.223 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.223 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.223 [16/267] Linking static target lib/librte_telemetry.a 00:01:47.223 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.223 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.521 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.521 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.521 [21/267] Linking target lib/librte_log.so.24.1 00:01:47.521 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.782 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.782 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.782 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.782 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.782 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:47.782 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.782 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.782 [30/267] Linking target lib/librte_kvargs.so.24.1 00:01:47.782 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.044 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.044 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.044 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:48.044 [35/267] Linking target lib/librte_telemetry.so.24.1 00:01:48.044 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.044 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.304 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:48.304 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:48.304 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.304 [41/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:48.304 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.304 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.304 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.304 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.565 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.565 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.565 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.565 [49/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.827 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.827 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.827 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.827 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.827 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:49.089 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:49.089 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:49.089 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:49.089 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:49.351 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:49.351 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:49.351 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:49.351 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:49.351 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.351 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.351 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:49.613 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:49.613 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.613 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:49.613 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:49.873 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:49.873 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:49.873 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.873 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:49.873 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:49.873 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.133 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:50.133 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:50.133 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:50.133 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.133 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:50.392 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:50.392 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:50.392 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:50.652 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:50.652 [85/267] Linking static target lib/librte_eal.a 00:01:50.652 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:50.652 [87/267] Linking static target lib/librte_ring.a 00:01:50.652 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:50.912 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:50.912 [90/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:50.912 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:50.912 [92/267] Linking static target lib/librte_mempool.a 00:01:50.912 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:50.912 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.172 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.172 [96/267] Linking static target lib/librte_rcu.a 00:01:51.172 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.172 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.172 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.431 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.431 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.431 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.431 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.690 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.691 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.691 [106/267] Linking static target lib/librte_meter.a 00:01:51.691 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:51.691 [108/267] Linking static target lib/librte_net.a 00:01:51.951 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.951 [110/267] Linking static target lib/librte_mbuf.a 00:01:51.951 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.951 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.951 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.951 [114/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.951 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.211 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.211 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.471 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.471 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.732 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.732 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.992 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.992 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.992 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.992 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.992 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.254 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:53.254 [128/267] Linking static target lib/librte_pci.a 00:01:53.254 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.254 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.254 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.254 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.254 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.515 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.515 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.515 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.515 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.515 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:53.515 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.515 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.515 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.515 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.515 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:53.777 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.777 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.777 [146/267] Linking static target lib/librte_cmdline.a 00:01:53.777 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.777 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:54.038 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:54.038 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:54.038 [151/267] Linking static target lib/librte_timer.a 00:01:54.038 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:54.300 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.300 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:54.562 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.562 [156/267] Linking static target lib/librte_compressdev.a 00:01:54.562 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.562 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.822 [159/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.822 [160/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.822 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.822 [162/267] Linking static target lib/librte_ethdev.a 00:01:54.822 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.082 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.082 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.082 [166/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.082 [167/267] Linking static target lib/librte_dmadev.a 00:01:55.082 [168/267] Linking static target lib/librte_hash.a 00:01:55.082 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.341 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.341 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.341 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.341 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.599 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.599 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.599 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.599 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.856 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.856 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.856 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.856 [181/267] Linking static target lib/librte_cryptodev.a 00:01:55.856 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.856 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.856 [184/267] Linking static target lib/librte_power.a 00:01:56.113 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.113 [186/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.113 [187/267] Linking static target lib/librte_reorder.a 00:01:56.369 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.369 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.369 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.369 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.369 [192/267] Linking static target lib/librte_security.a 00:01:56.626 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.883 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.140 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.140 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.140 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.140 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.397 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:57.397 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.397 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.397 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.655 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.655 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:57.912 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.912 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.912 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.912 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.912 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 
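For reference, the core DPDK libraries compiled above (librte_eal, librte_mempool, librte_mbuf) are consumed by applications roughly as in this minimal sketch. The function names are standard DPDK API for the 24.03 release being built; the pool sizing values are illustrative assumptions, not values taken from this build.

	/* Minimal sketch: bring up the DPDK EAL and create a packet mbuf pool,
	 * using librte_eal, librte_mempool and librte_mbuf as built above.
	 * Pool sizes are illustrative assumptions, not values from this CI run. */
	#include <stdio.h>
	#include <rte_eal.h>
	#include <rte_lcore.h>
	#include <rte_mbuf.h>
	#include <rte_mempool.h>

	int main(int argc, char **argv)
	{
		/* rte_eal_init() consumes EAL options (core mask, memory, PCI lists). */
		if (rte_eal_init(argc, argv) < 0) {
			fprintf(stderr, "rte_eal_init failed\n");
			return 1;
		}

		struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
				8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
		if (pool == NULL) {
			fprintf(stderr, "rte_pktmbuf_pool_create failed\n");
			rte_eal_cleanup();
			return 1;
		}

		printf("%u mbufs available\n", rte_mempool_avail_count(pool));
		rte_eal_cleanup();
		return 0;
	}

Since the user-defined options above select buildtype debug with b_sanitize=address, a program linked against this build would also run under AddressSanitizer.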
00:01:57.912 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.170 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.170 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.170 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.170 [214/267] Linking static target drivers/librte_bus_vdev.a 00:01:58.170 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.170 [216/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:58.170 [217/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.170 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.170 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.170 [220/267] Linking static target drivers/librte_bus_pci.a 00:01:58.427 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.427 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.427 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.427 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.427 [225/267] Linking static target drivers/librte_mempool_ring.a 00:01:58.686 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.686 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.630 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.630 [229/267] Linking target lib/librte_eal.so.24.1 00:01:59.630 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:59.630 [231/267] Linking target lib/librte_pci.so.24.1 00:01:59.630 [232/267] Linking target lib/librte_meter.so.24.1 00:01:59.630 [233/267] Linking target lib/librte_dmadev.so.24.1 00:01:59.888 [234/267] Linking target lib/librte_ring.so.24.1 00:01:59.888 [235/267] Linking target lib/librte_timer.so.24.1 00:01:59.888 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:59.888 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:59.888 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:59.888 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:59.888 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:59.888 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:59.888 [242/267] Linking target lib/librte_rcu.so.24.1 00:01:59.888 [243/267] Linking target lib/librte_mempool.so.24.1 00:01:59.888 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:59.888 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.888 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:00.145 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:00.145 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:00.145 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
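librte_ring, one of the shared objects being linked here, provides the lock-free FIFO that the ring mempool driver above is built on. A hedged sketch of single-producer/single-consumer use follows; the API names are standard DPDK, while the ring name, size and flags are illustrative.

	/* Sketch: pass one pointer through an SP/SC rte_ring.
	 * Assumes rte_eal_init() already succeeded, as in the earlier sketch. */
	#include <rte_errno.h>
	#include <rte_lcore.h>
	#include <rte_ring.h>

	static int demo_ring(void)
	{
		struct rte_ring *r = rte_ring_create("demo", 1024, rte_socket_id(),
				RING_F_SP_ENQ | RING_F_SC_DEQ);
		if (r == NULL)
			return -rte_errno;

		int payload = 42;
		void *obj = NULL;
		int rc = -1;

		/* Both calls return 0 on success. */
		if (rte_ring_enqueue(r, &payload) == 0 &&
		    rte_ring_dequeue(r, &obj) == 0)
			rc = (*(int *)obj == 42) ? 0 : -1;

		rte_ring_free(r);
		return rc;
	}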
00:02:00.145 [250/267] Linking target lib/librte_compressdev.so.24.1 00:02:00.145 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:00.145 [252/267] Linking target lib/librte_net.so.24.1 00:02:00.145 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:00.402 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:00.402 [255/267] Linking target lib/librte_hash.so.24.1 00:02:00.402 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:00.402 [257/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:00.402 [258/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:00.402 [259/267] Linking target lib/librte_security.so.24.1 00:02:00.966 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.966 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:00.966 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:00.966 [263/267] Linking target lib/librte_power.so.24.1 00:02:01.531 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.531 [265/267] Linking static target lib/librte_vhost.a 00:02:02.906 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.906 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:02.906 INFO: autodetecting backend as ninja 00:02:02.906 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:17.808 CC lib/ut/ut.o 00:02:17.808 CC lib/ut_mock/mock.o 00:02:17.808 CC lib/log/log_flags.o 00:02:17.808 CC lib/log/log_deprecated.o 00:02:17.808 CC lib/log/log.o 00:02:17.808 LIB libspdk_ut_mock.a 00:02:17.808 LIB libspdk_ut.a 00:02:17.808 LIB libspdk_log.a 00:02:17.808 SO libspdk_ut_mock.so.6.0 00:02:17.808 SO libspdk_ut.so.2.0 00:02:17.808 SO libspdk_log.so.7.1 00:02:17.808 SYMLINK libspdk_ut_mock.so 00:02:17.808 SYMLINK libspdk_ut.so 00:02:17.808 SYMLINK libspdk_log.so 00:02:18.065 CC lib/dma/dma.o 00:02:18.065 CC lib/util/base64.o 00:02:18.065 CC lib/util/bit_array.o 00:02:18.065 CC lib/util/crc16.o 00:02:18.065 CXX lib/trace_parser/trace.o 00:02:18.065 CC lib/util/crc32.o 00:02:18.065 CC lib/util/cpuset.o 00:02:18.065 CC lib/util/crc32c.o 00:02:18.065 CC lib/ioat/ioat.o 00:02:18.065 CC lib/vfio_user/host/vfio_user_pci.o 00:02:18.065 CC lib/util/crc32_ieee.o 00:02:18.065 CC lib/vfio_user/host/vfio_user.o 00:02:18.065 CC lib/util/crc64.o 00:02:18.065 CC lib/util/dif.o 00:02:18.065 LIB libspdk_dma.a 00:02:18.065 CC lib/util/fd.o 00:02:18.065 CC lib/util/fd_group.o 00:02:18.065 SO libspdk_dma.so.5.0 00:02:18.065 CC lib/util/file.o 00:02:18.322 CC lib/util/hexlify.o 00:02:18.322 SYMLINK libspdk_dma.so 00:02:18.322 CC lib/util/iov.o 00:02:18.322 LIB libspdk_ioat.a 00:02:18.322 CC lib/util/math.o 00:02:18.322 SO libspdk_ioat.so.7.0 00:02:18.322 CC lib/util/net.o 00:02:18.322 LIB libspdk_vfio_user.a 00:02:18.322 SO libspdk_vfio_user.so.5.0 00:02:18.322 SYMLINK libspdk_ioat.so 00:02:18.322 CC lib/util/pipe.o 00:02:18.322 CC lib/util/strerror_tls.o 00:02:18.322 CC lib/util/string.o 00:02:18.322 SYMLINK libspdk_vfio_user.so 00:02:18.322 CC lib/util/uuid.o 00:02:18.322 CC lib/util/xor.o 00:02:18.322 CC lib/util/zipf.o 00:02:18.322 CC lib/util/md5.o 00:02:18.885 LIB libspdk_util.a 00:02:18.885 SO libspdk_util.so.10.1 00:02:18.885 LIB libspdk_trace_parser.a 00:02:18.885 SO libspdk_trace_parser.so.6.0 00:02:18.885 
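With DPDK linked, the log switches to SPDK itself; libspdk_log and libspdk_util are among the first targets. A minimal, hedged sketch of how an application brings up this stack through libspdk_env_dpdk (built just below): spdk_env_opts_init(), spdk_env_init() and SPDK_NOTICELOG() are standard SPDK API, while the application name is a placeholder.

	/* Sketch: initialize the SPDK environment layer, which wraps the DPDK
	 * EAL built earlier, then emit a log line through libspdk_log. */
	#include <stdio.h>
	#include <spdk/env.h>
	#include <spdk/log.h>

	int main(void)
	{
		struct spdk_env_opts opts;

		spdk_env_opts_init(&opts);
		opts.name = "env_demo";   /* placeholder application name */

		if (spdk_env_init(&opts) < 0) {
			fprintf(stderr, "spdk_env_init failed\n");
			return 1;
		}

		SPDK_NOTICELOG("environment up on core %u\n",
			       spdk_env_get_current_core());
		return 0;
	}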
SYMLINK libspdk_util.so 00:02:19.143 SYMLINK libspdk_trace_parser.so 00:02:19.143 CC lib/rdma_utils/rdma_utils.o 00:02:19.143 CC lib/idxd/idxd.o 00:02:19.143 CC lib/vmd/vmd.o 00:02:19.143 CC lib/vmd/led.o 00:02:19.143 CC lib/idxd/idxd_user.o 00:02:19.143 CC lib/idxd/idxd_kernel.o 00:02:19.143 CC lib/conf/conf.o 00:02:19.143 CC lib/env_dpdk/memory.o 00:02:19.143 CC lib/env_dpdk/env.o 00:02:19.143 CC lib/json/json_parse.o 00:02:19.143 CC lib/json/json_util.o 00:02:19.143 CC lib/json/json_write.o 00:02:19.143 LIB libspdk_conf.a 00:02:19.399 SO libspdk_conf.so.6.0 00:02:19.399 CC lib/env_dpdk/pci.o 00:02:19.399 CC lib/env_dpdk/init.o 00:02:19.399 LIB libspdk_rdma_utils.a 00:02:19.399 SO libspdk_rdma_utils.so.1.0 00:02:19.399 SYMLINK libspdk_conf.so 00:02:19.399 CC lib/env_dpdk/threads.o 00:02:19.399 SYMLINK libspdk_rdma_utils.so 00:02:19.399 CC lib/env_dpdk/pci_ioat.o 00:02:19.399 CC lib/env_dpdk/pci_virtio.o 00:02:19.399 LIB libspdk_json.a 00:02:19.399 CC lib/env_dpdk/pci_vmd.o 00:02:19.399 CC lib/env_dpdk/pci_idxd.o 00:02:19.399 SO libspdk_json.so.6.0 00:02:19.399 CC lib/env_dpdk/pci_event.o 00:02:19.655 SYMLINK libspdk_json.so 00:02:19.655 LIB libspdk_idxd.a 00:02:19.655 CC lib/env_dpdk/sigbus_handler.o 00:02:19.655 SO libspdk_idxd.so.12.1 00:02:19.655 CC lib/env_dpdk/pci_dpdk.o 00:02:19.655 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:19.655 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:19.655 SYMLINK libspdk_idxd.so 00:02:19.655 CC lib/rdma_provider/common.o 00:02:19.655 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:19.655 LIB libspdk_vmd.a 00:02:19.655 SO libspdk_vmd.so.6.0 00:02:19.911 CC lib/jsonrpc/jsonrpc_server.o 00:02:19.911 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:19.911 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.911 CC lib/jsonrpc/jsonrpc_client.o 00:02:19.911 SYMLINK libspdk_vmd.so 00:02:19.911 LIB libspdk_rdma_provider.a 00:02:19.911 SO libspdk_rdma_provider.so.7.0 00:02:19.911 SYMLINK libspdk_rdma_provider.so 00:02:19.911 LIB libspdk_jsonrpc.a 00:02:20.172 SO libspdk_jsonrpc.so.6.0 00:02:20.172 SYMLINK libspdk_jsonrpc.so 00:02:20.428 CC lib/rpc/rpc.o 00:02:20.428 LIB libspdk_env_dpdk.a 00:02:20.428 SO libspdk_env_dpdk.so.15.1 00:02:20.685 LIB libspdk_rpc.a 00:02:20.685 SO libspdk_rpc.so.6.0 00:02:20.685 SYMLINK libspdk_env_dpdk.so 00:02:20.685 SYMLINK libspdk_rpc.so 00:02:20.943 CC lib/keyring/keyring_rpc.o 00:02:20.943 CC lib/keyring/keyring.o 00:02:20.943 CC lib/notify/notify.o 00:02:20.943 CC lib/notify/notify_rpc.o 00:02:20.943 CC lib/trace/trace.o 00:02:20.943 CC lib/trace/trace_flags.o 00:02:20.943 CC lib/trace/trace_rpc.o 00:02:20.943 LIB libspdk_notify.a 00:02:20.943 SO libspdk_notify.so.6.0 00:02:20.943 SYMLINK libspdk_notify.so 00:02:20.943 LIB libspdk_keyring.a 00:02:20.943 SO libspdk_keyring.so.2.0 00:02:20.943 LIB libspdk_trace.a 00:02:21.201 SO libspdk_trace.so.11.0 00:02:21.201 SYMLINK libspdk_keyring.so 00:02:21.201 SYMLINK libspdk_trace.so 00:02:21.458 CC lib/sock/sock.o 00:02:21.458 CC lib/sock/sock_rpc.o 00:02:21.458 CC lib/thread/thread.o 00:02:21.458 CC lib/thread/iobuf.o 00:02:21.715 LIB libspdk_sock.a 00:02:21.715 SO libspdk_sock.so.10.0 00:02:21.972 SYMLINK libspdk_sock.so 00:02:21.972 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.972 CC lib/nvme/nvme_pcie_common.o 00:02:21.972 CC lib/nvme/nvme_ctrlr.o 00:02:21.972 CC lib/nvme/nvme_fabric.o 00:02:21.972 CC lib/nvme/nvme.o 00:02:21.972 CC lib/nvme/nvme_ns_cmd.o 00:02:21.972 CC lib/nvme/nvme_qpair.o 00:02:21.972 CC lib/nvme/nvme_pcie.o 00:02:21.972 CC lib/nvme/nvme_ns.o 00:02:22.545 CC lib/nvme/nvme_quirks.o 00:02:22.803 
CC lib/nvme/nvme_transport.o 00:02:22.803 CC lib/nvme/nvme_discovery.o 00:02:22.803 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:22.803 LIB libspdk_thread.a 00:02:22.803 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:22.803 CC lib/nvme/nvme_tcp.o 00:02:22.803 SO libspdk_thread.so.11.0 00:02:22.803 CC lib/nvme/nvme_opal.o 00:02:23.061 SYMLINK libspdk_thread.so 00:02:23.061 CC lib/nvme/nvme_io_msg.o 00:02:23.061 CC lib/nvme/nvme_poll_group.o 00:02:23.061 CC lib/nvme/nvme_zns.o 00:02:23.320 CC lib/nvme/nvme_stubs.o 00:02:23.320 CC lib/nvme/nvme_auth.o 00:02:23.320 CC lib/nvme/nvme_cuse.o 00:02:23.320 CC lib/nvme/nvme_rdma.o 00:02:23.578 CC lib/accel/accel.o 00:02:23.578 CC lib/blob/blobstore.o 00:02:23.578 CC lib/accel/accel_rpc.o 00:02:23.836 CC lib/accel/accel_sw.o 00:02:23.836 CC lib/init/json_config.o 00:02:23.836 CC lib/virtio/virtio.o 00:02:24.093 CC lib/init/subsystem.o 00:02:24.093 CC lib/init/subsystem_rpc.o 00:02:24.093 CC lib/init/rpc.o 00:02:24.093 CC lib/virtio/virtio_vhost_user.o 00:02:24.352 CC lib/blob/request.o 00:02:24.352 CC lib/blob/zeroes.o 00:02:24.352 CC lib/blob/blob_bs_dev.o 00:02:24.352 LIB libspdk_init.a 00:02:24.352 SO libspdk_init.so.6.0 00:02:24.352 CC lib/fsdev/fsdev.o 00:02:24.352 SYMLINK libspdk_init.so 00:02:24.352 CC lib/virtio/virtio_vfio_user.o 00:02:24.352 CC lib/virtio/virtio_pci.o 00:02:24.352 CC lib/fsdev/fsdev_io.o 00:02:24.612 CC lib/fsdev/fsdev_rpc.o 00:02:24.612 CC lib/event/app.o 00:02:24.612 CC lib/event/reactor.o 00:02:24.612 CC lib/event/log_rpc.o 00:02:24.612 CC lib/event/app_rpc.o 00:02:24.612 LIB libspdk_accel.a 00:02:24.612 LIB libspdk_virtio.a 00:02:24.612 SO libspdk_accel.so.16.0 00:02:24.612 LIB libspdk_nvme.a 00:02:24.612 SO libspdk_virtio.so.7.0 00:02:24.872 CC lib/event/scheduler_static.o 00:02:24.872 SYMLINK libspdk_accel.so 00:02:24.872 SYMLINK libspdk_virtio.so 00:02:24.872 SO libspdk_nvme.so.15.0 00:02:24.872 CC lib/bdev/bdev.o 00:02:24.872 CC lib/bdev/bdev_rpc.o 00:02:24.872 CC lib/bdev/part.o 00:02:24.872 CC lib/bdev/bdev_zone.o 00:02:24.872 CC lib/bdev/scsi_nvme.o 00:02:24.872 LIB libspdk_event.a 00:02:25.131 LIB libspdk_fsdev.a 00:02:25.131 SO libspdk_event.so.14.0 00:02:25.131 SO libspdk_fsdev.so.2.0 00:02:25.131 SYMLINK libspdk_event.so 00:02:25.131 SYMLINK libspdk_nvme.so 00:02:25.131 SYMLINK libspdk_fsdev.so 00:02:25.389 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:25.956 LIB libspdk_fuse_dispatcher.a 00:02:25.956 SO libspdk_fuse_dispatcher.so.1.0 00:02:25.956 SYMLINK libspdk_fuse_dispatcher.so 00:02:26.894 LIB libspdk_blob.a 00:02:26.894 SO libspdk_blob.so.11.0 00:02:27.152 SYMLINK libspdk_blob.so 00:02:27.152 CC lib/lvol/lvol.o 00:02:27.410 CC lib/blobfs/blobfs.o 00:02:27.410 CC lib/blobfs/tree.o 00:02:27.669 LIB libspdk_bdev.a 00:02:27.669 SO libspdk_bdev.so.17.0 00:02:27.927 SYMLINK libspdk_bdev.so 00:02:27.927 CC lib/ftl/ftl_core.o 00:02:27.927 CC lib/ftl/ftl_init.o 00:02:27.927 CC lib/ftl/ftl_layout.o 00:02:27.927 CC lib/ftl/ftl_debug.o 00:02:27.927 CC lib/scsi/dev.o 00:02:27.927 CC lib/nvmf/ctrlr.o 00:02:27.927 CC lib/nbd/nbd.o 00:02:27.927 CC lib/ublk/ublk.o 00:02:28.185 LIB libspdk_blobfs.a 00:02:28.185 SO libspdk_blobfs.so.10.0 00:02:28.185 CC lib/ublk/ublk_rpc.o 00:02:28.185 CC lib/nvmf/ctrlr_discovery.o 00:02:28.185 LIB libspdk_lvol.a 00:02:28.185 SYMLINK libspdk_blobfs.so 00:02:28.185 CC lib/nvmf/ctrlr_bdev.o 00:02:28.185 CC lib/scsi/lun.o 00:02:28.185 SO libspdk_lvol.so.10.0 00:02:28.185 SYMLINK libspdk_lvol.so 00:02:28.185 CC lib/scsi/port.o 00:02:28.443 CC lib/nvmf/subsystem.o 00:02:28.443 CC lib/scsi/scsi.o 
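The lib/nvme objects above implement the userspace NVMe driver that the functional tests later in this run exercise. A hedged sketch of its discovery flow: spdk_nvme_probe() and the callback signatures are standard SPDK API; error handling is trimmed for brevity.

	/* Sketch: enumerate local NVMe controllers with libspdk_nvme.
	 * Assumes spdk_env_init() already ran (see the previous sketch). */
	#include <stdbool.h>
	#include <stdio.h>
	#include <spdk/nvme.h>

	static bool
	probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		 struct spdk_nvme_ctrlr_opts *opts)
	{
		return true;   /* attach to every controller that is found */
	}

	static void
	attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		  struct spdk_nvme_ctrlr *ctrlr,
		  const struct spdk_nvme_ctrlr_opts *opts)
	{
		const struct spdk_nvme_ctrlr_data *cdata =
			spdk_nvme_ctrlr_get_data(ctrlr);

		/* mn is a fixed-width, space-padded identify field, hence the width. */
		printf("attached: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
	}

	static int nvme_enumerate(void)
	{
		/* A NULL transport ID probes the default (PCIe) transport. */
		return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
	}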
00:02:28.443 CC lib/ftl/ftl_io.o 00:02:28.443 CC lib/nbd/nbd_rpc.o 00:02:28.443 CC lib/ftl/ftl_sb.o 00:02:28.443 CC lib/ftl/ftl_l2p.o 00:02:28.443 CC lib/scsi/scsi_bdev.o 00:02:28.443 LIB libspdk_ublk.a 00:02:28.443 SO libspdk_ublk.so.3.0 00:02:28.701 LIB libspdk_nbd.a 00:02:28.701 SYMLINK libspdk_ublk.so 00:02:28.701 CC lib/scsi/scsi_pr.o 00:02:28.701 SO libspdk_nbd.so.7.0 00:02:28.701 CC lib/ftl/ftl_l2p_flat.o 00:02:28.701 CC lib/ftl/ftl_nv_cache.o 00:02:28.701 SYMLINK libspdk_nbd.so 00:02:28.701 CC lib/ftl/ftl_band.o 00:02:28.701 CC lib/nvmf/nvmf.o 00:02:28.701 CC lib/nvmf/nvmf_rpc.o 00:02:28.701 CC lib/ftl/ftl_band_ops.o 00:02:28.958 CC lib/ftl/ftl_writer.o 00:02:28.958 CC lib/ftl/ftl_rq.o 00:02:28.958 CC lib/scsi/scsi_rpc.o 00:02:28.958 CC lib/ftl/ftl_reloc.o 00:02:28.958 CC lib/ftl/ftl_l2p_cache.o 00:02:29.215 CC lib/scsi/task.o 00:02:29.215 CC lib/ftl/ftl_p2l.o 00:02:29.215 CC lib/ftl/ftl_p2l_log.o 00:02:29.215 LIB libspdk_scsi.a 00:02:29.473 SO libspdk_scsi.so.9.0 00:02:29.473 CC lib/nvmf/transport.o 00:02:29.473 CC lib/nvmf/tcp.o 00:02:29.473 SYMLINK libspdk_scsi.so 00:02:29.473 CC lib/nvmf/stubs.o 00:02:29.473 CC lib/nvmf/mdns_server.o 00:02:29.473 CC lib/nvmf/rdma.o 00:02:29.473 CC lib/nvmf/auth.o 00:02:29.729 CC lib/ftl/mngt/ftl_mngt.o 00:02:29.729 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:29.729 CC lib/iscsi/conn.o 00:02:29.729 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:29.729 CC lib/vhost/vhost.o 00:02:29.985 CC lib/iscsi/init_grp.o 00:02:29.985 CC lib/iscsi/iscsi.o 00:02:29.985 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:29.985 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:29.985 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:29.985 CC lib/iscsi/param.o 00:02:29.985 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:30.243 CC lib/iscsi/portal_grp.o 00:02:30.243 CC lib/vhost/vhost_rpc.o 00:02:30.243 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:30.243 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:30.243 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:30.500 CC lib/iscsi/tgt_node.o 00:02:30.500 CC lib/iscsi/iscsi_subsystem.o 00:02:30.500 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:30.500 CC lib/vhost/vhost_scsi.o 00:02:30.500 CC lib/vhost/vhost_blk.o 00:02:30.756 CC lib/vhost/rte_vhost_user.o 00:02:30.756 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:30.756 CC lib/iscsi/iscsi_rpc.o 00:02:30.756 CC lib/iscsi/task.o 00:02:31.014 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:31.014 CC lib/ftl/utils/ftl_conf.o 00:02:31.014 CC lib/ftl/utils/ftl_md.o 00:02:31.014 CC lib/ftl/utils/ftl_mempool.o 00:02:31.271 CC lib/ftl/utils/ftl_bitmap.o 00:02:31.271 CC lib/ftl/utils/ftl_property.o 00:02:31.271 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:31.271 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:31.271 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:31.271 LIB libspdk_iscsi.a 00:02:31.271 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:31.271 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:31.271 SO libspdk_iscsi.so.8.0 00:02:31.529 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:31.529 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:31.529 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:31.529 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:31.529 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:31.529 SYMLINK libspdk_iscsi.so 00:02:31.529 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:31.529 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:31.529 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:31.529 CC lib/ftl/base/ftl_base_dev.o 00:02:31.529 CC lib/ftl/base/ftl_base_bdev.o 00:02:31.529 CC lib/ftl/ftl_trace.o 00:02:31.786 LIB libspdk_vhost.a 00:02:31.786 SO libspdk_vhost.so.8.0 00:02:31.786 LIB libspdk_nvmf.a 00:02:31.786 SYMLINK 
libspdk_vhost.so 00:02:31.786 LIB libspdk_ftl.a 00:02:31.786 SO libspdk_nvmf.so.20.0 00:02:32.062 SO libspdk_ftl.so.9.0 00:02:32.062 SYMLINK libspdk_nvmf.so 00:02:32.377 SYMLINK libspdk_ftl.so 00:02:32.635 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.635 CC module/accel/ioat/accel_ioat.o 00:02:32.635 CC module/sock/posix/posix.o 00:02:32.635 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.635 CC module/accel/iaa/accel_iaa.o 00:02:32.635 CC module/accel/dsa/accel_dsa.o 00:02:32.635 CC module/keyring/file/keyring.o 00:02:32.635 CC module/accel/error/accel_error.o 00:02:32.635 CC module/blob/bdev/blob_bdev.o 00:02:32.635 CC module/fsdev/aio/fsdev_aio.o 00:02:32.635 LIB libspdk_env_dpdk_rpc.a 00:02:32.635 SO libspdk_env_dpdk_rpc.so.6.0 00:02:32.635 SYMLINK libspdk_env_dpdk_rpc.so 00:02:32.635 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.635 CC module/keyring/file/keyring_rpc.o 00:02:32.891 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.891 LIB libspdk_scheduler_dynamic.a 00:02:32.891 CC module/accel/error/accel_error_rpc.o 00:02:32.891 SO libspdk_scheduler_dynamic.so.4.0 00:02:32.891 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.891 SYMLINK libspdk_scheduler_dynamic.so 00:02:32.891 LIB libspdk_keyring_file.a 00:02:32.891 LIB libspdk_accel_dsa.a 00:02:32.891 SO libspdk_keyring_file.so.2.0 00:02:32.891 LIB libspdk_accel_ioat.a 00:02:32.891 LIB libspdk_blob_bdev.a 00:02:32.891 SO libspdk_accel_dsa.so.5.0 00:02:32.891 LIB libspdk_accel_error.a 00:02:32.891 SO libspdk_blob_bdev.so.11.0 00:02:32.891 SO libspdk_accel_ioat.so.6.0 00:02:32.891 LIB libspdk_accel_iaa.a 00:02:32.891 SO libspdk_accel_error.so.2.0 00:02:32.891 SYMLINK libspdk_keyring_file.so 00:02:32.891 SO libspdk_accel_iaa.so.3.0 00:02:32.891 SYMLINK libspdk_accel_dsa.so 00:02:32.891 SYMLINK libspdk_blob_bdev.so 00:02:32.891 SYMLINK libspdk_accel_ioat.so 00:02:32.891 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:32.891 CC module/fsdev/aio/linux_aio_mgr.o 00:02:32.891 CC module/keyring/linux/keyring.o 00:02:32.891 SYMLINK libspdk_accel_error.so 00:02:32.891 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:33.149 SYMLINK libspdk_accel_iaa.so 00:02:33.149 CC module/keyring/linux/keyring_rpc.o 00:02:33.149 CC module/scheduler/gscheduler/gscheduler.o 00:02:33.149 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.149 LIB libspdk_keyring_linux.a 00:02:33.149 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:33.149 SO libspdk_keyring_linux.so.1.0 00:02:33.149 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.149 CC module/bdev/delay/vbdev_delay.o 00:02:33.149 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.149 SYMLINK libspdk_keyring_linux.so 00:02:33.149 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.149 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.149 CC module/bdev/error/vbdev_error.o 00:02:33.406 CC module/bdev/gpt/gpt.o 00:02:33.406 LIB libspdk_scheduler_gscheduler.a 00:02:33.406 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.406 LIB libspdk_fsdev_aio.a 00:02:33.406 SO libspdk_scheduler_gscheduler.so.4.0 00:02:33.406 LIB libspdk_sock_posix.a 00:02:33.406 SO libspdk_fsdev_aio.so.1.0 00:02:33.406 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.406 SO libspdk_sock_posix.so.6.0 00:02:33.406 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.406 LIB libspdk_blobfs_bdev.a 00:02:33.406 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.406 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.406 SYMLINK libspdk_fsdev_aio.so 00:02:33.406 SO libspdk_blobfs_bdev.so.6.0 00:02:33.406 SYMLINK libspdk_sock_posix.so 00:02:33.406 SYMLINK libspdk_blobfs_bdev.so 00:02:33.664 
CC module/bdev/malloc/bdev_malloc.o 00:02:33.664 LIB libspdk_bdev_error.a 00:02:33.664 CC module/bdev/null/bdev_null.o 00:02:33.664 CC module/bdev/nvme/bdev_nvme.o 00:02:33.664 SO libspdk_bdev_error.so.6.0 00:02:33.664 LIB libspdk_bdev_delay.a 00:02:33.664 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.664 SO libspdk_bdev_delay.so.6.0 00:02:33.664 CC module/bdev/raid/bdev_raid.o 00:02:33.664 LIB libspdk_bdev_gpt.a 00:02:33.664 SYMLINK libspdk_bdev_error.so 00:02:33.664 SO libspdk_bdev_gpt.so.6.0 00:02:33.664 CC module/bdev/null/bdev_null_rpc.o 00:02:33.664 SYMLINK libspdk_bdev_delay.so 00:02:33.664 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.664 SYMLINK libspdk_bdev_gpt.so 00:02:33.922 LIB libspdk_bdev_lvol.a 00:02:33.922 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.922 SO libspdk_bdev_lvol.so.6.0 00:02:33.922 CC module/bdev/split/vbdev_split.o 00:02:33.922 LIB libspdk_bdev_null.a 00:02:33.922 SO libspdk_bdev_null.so.6.0 00:02:33.922 SYMLINK libspdk_bdev_lvol.so 00:02:33.922 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.922 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.922 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.922 SYMLINK libspdk_bdev_null.so 00:02:33.922 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.922 LIB libspdk_bdev_malloc.a 00:02:33.922 CC module/bdev/xnvme/bdev_xnvme.o 00:02:33.922 SO libspdk_bdev_malloc.so.6.0 00:02:33.922 LIB libspdk_bdev_passthru.a 00:02:34.180 SO libspdk_bdev_passthru.so.6.0 00:02:34.180 SYMLINK libspdk_bdev_malloc.so 00:02:34.180 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:34.180 CC module/bdev/raid/bdev_raid_rpc.o 00:02:34.180 LIB libspdk_bdev_split.a 00:02:34.180 CC module/bdev/nvme/nvme_rpc.o 00:02:34.180 SO libspdk_bdev_split.so.6.0 00:02:34.180 SYMLINK libspdk_bdev_passthru.so 00:02:34.180 CC module/bdev/raid/bdev_raid_sb.o 00:02:34.180 SYMLINK libspdk_bdev_split.so 00:02:34.180 CC module/bdev/raid/raid0.o 00:02:34.180 LIB libspdk_bdev_xnvme.a 00:02:34.180 LIB libspdk_bdev_zone_block.a 00:02:34.180 CC module/bdev/aio/bdev_aio.o 00:02:34.180 SO libspdk_bdev_xnvme.so.3.0 00:02:34.180 SO libspdk_bdev_zone_block.so.6.0 00:02:34.438 SYMLINK libspdk_bdev_xnvme.so 00:02:34.438 CC module/bdev/ftl/bdev_ftl.o 00:02:34.438 SYMLINK libspdk_bdev_zone_block.so 00:02:34.438 CC module/bdev/aio/bdev_aio_rpc.o 00:02:34.438 CC module/bdev/raid/raid1.o 00:02:34.438 CC module/bdev/iscsi/bdev_iscsi.o 00:02:34.438 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:34.438 CC module/bdev/raid/concat.o 00:02:34.438 CC module/bdev/nvme/bdev_mdns_client.o 00:02:34.438 LIB libspdk_bdev_aio.a 00:02:34.438 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:34.438 SO libspdk_bdev_aio.so.6.0 00:02:34.696 SYMLINK libspdk_bdev_aio.so 00:02:34.696 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:34.696 CC module/bdev/nvme/vbdev_opal.o 00:02:34.696 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:34.696 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:34.696 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:34.696 LIB libspdk_bdev_raid.a 00:02:34.696 LIB libspdk_bdev_ftl.a 00:02:34.696 SO libspdk_bdev_ftl.so.6.0 00:02:34.696 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:34.696 SO libspdk_bdev_raid.so.6.0 00:02:34.953 SYMLINK libspdk_bdev_ftl.so 00:02:34.953 LIB libspdk_bdev_iscsi.a 00:02:34.953 SYMLINK libspdk_bdev_raid.so 00:02:34.953 SO libspdk_bdev_iscsi.so.6.0 00:02:34.953 SYMLINK libspdk_bdev_iscsi.so 00:02:34.953 LIB libspdk_bdev_virtio.a 00:02:34.953 SO libspdk_bdev_virtio.so.6.0 00:02:35.211 SYMLINK libspdk_bdev_virtio.so 00:02:36.185 LIB libspdk_bdev_nvme.a 00:02:36.443 
SO libspdk_bdev_nvme.so.7.1 00:02:36.443 SYMLINK libspdk_bdev_nvme.so 00:02:37.009 CC module/event/subsystems/sock/sock.o 00:02:37.009 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.009 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.009 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.009 CC module/event/subsystems/keyring/keyring.o 00:02:37.009 CC module/event/subsystems/vmd/vmd.o 00:02:37.009 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.009 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.009 CC module/event/subsystems/fsdev/fsdev.o 00:02:37.009 LIB libspdk_event_vhost_blk.a 00:02:37.009 LIB libspdk_event_keyring.a 00:02:37.009 LIB libspdk_event_sock.a 00:02:37.009 LIB libspdk_event_fsdev.a 00:02:37.009 LIB libspdk_event_scheduler.a 00:02:37.009 LIB libspdk_event_vmd.a 00:02:37.009 SO libspdk_event_keyring.so.1.0 00:02:37.009 SO libspdk_event_vhost_blk.so.3.0 00:02:37.009 LIB libspdk_event_iobuf.a 00:02:37.009 SO libspdk_event_sock.so.5.0 00:02:37.009 SO libspdk_event_fsdev.so.1.0 00:02:37.009 SO libspdk_event_scheduler.so.4.0 00:02:37.009 SO libspdk_event_vmd.so.6.0 00:02:37.009 SO libspdk_event_iobuf.so.3.0 00:02:37.009 SYMLINK libspdk_event_keyring.so 00:02:37.009 SYMLINK libspdk_event_vhost_blk.so 00:02:37.009 SYMLINK libspdk_event_sock.so 00:02:37.009 SYMLINK libspdk_event_fsdev.so 00:02:37.009 SYMLINK libspdk_event_scheduler.so 00:02:37.009 SYMLINK libspdk_event_vmd.so 00:02:37.009 SYMLINK libspdk_event_iobuf.so 00:02:37.266 CC module/event/subsystems/accel/accel.o 00:02:37.523 LIB libspdk_event_accel.a 00:02:37.523 SO libspdk_event_accel.so.6.0 00:02:37.523 SYMLINK libspdk_event_accel.so 00:02:37.780 CC module/event/subsystems/bdev/bdev.o 00:02:37.780 LIB libspdk_event_bdev.a 00:02:37.780 SO libspdk_event_bdev.so.6.0 00:02:38.035 SYMLINK libspdk_event_bdev.so 00:02:38.035 CC module/event/subsystems/nbd/nbd.o 00:02:38.035 CC module/event/subsystems/ublk/ublk.o 00:02:38.035 CC module/event/subsystems/scsi/scsi.o 00:02:38.035 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:38.035 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:38.293 LIB libspdk_event_nbd.a 00:02:38.293 LIB libspdk_event_ublk.a 00:02:38.293 LIB libspdk_event_scsi.a 00:02:38.293 SO libspdk_event_nbd.so.6.0 00:02:38.293 SO libspdk_event_ublk.so.3.0 00:02:38.293 SO libspdk_event_scsi.so.6.0 00:02:38.293 SYMLINK libspdk_event_ublk.so 00:02:38.293 SYMLINK libspdk_event_scsi.so 00:02:38.293 SYMLINK libspdk_event_nbd.so 00:02:38.293 LIB libspdk_event_nvmf.a 00:02:38.293 SO libspdk_event_nvmf.so.6.0 00:02:38.293 SYMLINK libspdk_event_nvmf.so 00:02:38.551 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:38.551 CC module/event/subsystems/iscsi/iscsi.o 00:02:38.551 LIB libspdk_event_vhost_scsi.a 00:02:38.551 LIB libspdk_event_iscsi.a 00:02:38.551 SO libspdk_event_vhost_scsi.so.3.0 00:02:38.551 SO libspdk_event_iscsi.so.6.0 00:02:38.551 SYMLINK libspdk_event_vhost_scsi.so 00:02:38.808 SYMLINK libspdk_event_iscsi.so 00:02:38.808 SO libspdk.so.6.0 00:02:38.808 SYMLINK libspdk.so 00:02:39.066 CC app/spdk_nvme_perf/perf.o 00:02:39.066 CC app/spdk_nvme_identify/identify.o 00:02:39.066 CC app/trace_record/trace_record.o 00:02:39.066 CXX app/trace/trace.o 00:02:39.066 CC app/spdk_lspci/spdk_lspci.o 00:02:39.066 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.066 CC app/spdk_tgt/spdk_tgt.o 00:02:39.066 CC app/nvmf_tgt/nvmf_main.o 00:02:39.066 CC examples/util/zipf/zipf.o 00:02:39.066 CC test/thread/poller_perf/poller_perf.o 00:02:39.066 LINK spdk_lspci 00:02:39.066 LINK spdk_trace_record 00:02:39.066 
LINK spdk_tgt 00:02:39.066 LINK iscsi_tgt 00:02:39.323 LINK zipf 00:02:39.323 LINK poller_perf 00:02:39.323 LINK nvmf_tgt 00:02:39.323 LINK spdk_trace 00:02:39.323 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.323 CC app/spdk_top/spdk_top.o 00:02:39.630 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.630 CC app/spdk_dd/spdk_dd.o 00:02:39.630 CC examples/ioat/perf/perf.o 00:02:39.630 CC test/dma/test_dma/test_dma.o 00:02:39.630 LINK spdk_nvme_discover 00:02:39.630 CC examples/thread/thread/thread_ex.o 00:02:39.630 CC test/app/bdev_svc/bdev_svc.o 00:02:39.630 LINK interrupt_tgt 00:02:39.630 LINK ioat_perf 00:02:39.887 LINK spdk_nvme_identify 00:02:39.887 CC app/fio/nvme/fio_plugin.o 00:02:39.887 LINK bdev_svc 00:02:39.887 LINK thread 00:02:39.887 LINK spdk_dd 00:02:39.887 LINK spdk_nvme_perf 00:02:39.887 CC examples/ioat/verify/verify.o 00:02:39.887 CC app/fio/bdev/fio_plugin.o 00:02:39.887 LINK test_dma 00:02:40.144 TEST_HEADER include/spdk/accel.h 00:02:40.144 TEST_HEADER include/spdk/accel_module.h 00:02:40.144 TEST_HEADER include/spdk/assert.h 00:02:40.144 TEST_HEADER include/spdk/barrier.h 00:02:40.144 TEST_HEADER include/spdk/base64.h 00:02:40.144 TEST_HEADER include/spdk/bdev.h 00:02:40.144 TEST_HEADER include/spdk/bdev_module.h 00:02:40.144 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.144 TEST_HEADER include/spdk/bit_array.h 00:02:40.145 TEST_HEADER include/spdk/bit_pool.h 00:02:40.145 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.145 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.145 TEST_HEADER include/spdk/blobfs.h 00:02:40.145 TEST_HEADER include/spdk/blob.h 00:02:40.145 TEST_HEADER include/spdk/conf.h 00:02:40.145 TEST_HEADER include/spdk/config.h 00:02:40.145 TEST_HEADER include/spdk/cpuset.h 00:02:40.145 TEST_HEADER include/spdk/crc16.h 00:02:40.145 TEST_HEADER include/spdk/crc32.h 00:02:40.145 TEST_HEADER include/spdk/crc64.h 00:02:40.145 TEST_HEADER include/spdk/dif.h 00:02:40.145 CC examples/sock/hello_world/hello_sock.o 00:02:40.145 TEST_HEADER include/spdk/dma.h 00:02:40.145 TEST_HEADER include/spdk/endian.h 00:02:40.145 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.145 TEST_HEADER include/spdk/env.h 00:02:40.145 TEST_HEADER include/spdk/event.h 00:02:40.145 TEST_HEADER include/spdk/fd_group.h 00:02:40.145 TEST_HEADER include/spdk/fd.h 00:02:40.145 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:40.145 TEST_HEADER include/spdk/file.h 00:02:40.145 TEST_HEADER include/spdk/fsdev.h 00:02:40.145 TEST_HEADER include/spdk/fsdev_module.h 00:02:40.145 TEST_HEADER include/spdk/ftl.h 00:02:40.145 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:40.145 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.145 TEST_HEADER include/spdk/hexlify.h 00:02:40.145 TEST_HEADER include/spdk/histogram_data.h 00:02:40.145 TEST_HEADER include/spdk/idxd.h 00:02:40.145 LINK verify 00:02:40.145 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.145 TEST_HEADER include/spdk/init.h 00:02:40.145 TEST_HEADER include/spdk/ioat.h 00:02:40.145 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.145 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.145 TEST_HEADER include/spdk/json.h 00:02:40.145 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.145 TEST_HEADER include/spdk/keyring.h 00:02:40.145 TEST_HEADER include/spdk/keyring_module.h 00:02:40.145 TEST_HEADER include/spdk/likely.h 00:02:40.145 TEST_HEADER include/spdk/log.h 00:02:40.145 CC test/event/event_perf/event_perf.o 00:02:40.145 TEST_HEADER include/spdk/lvol.h 00:02:40.145 TEST_HEADER include/spdk/md5.h 00:02:40.145 TEST_HEADER include/spdk/memory.h 00:02:40.145 
TEST_HEADER include/spdk/mmio.h 00:02:40.145 TEST_HEADER include/spdk/nbd.h 00:02:40.145 TEST_HEADER include/spdk/net.h 00:02:40.145 TEST_HEADER include/spdk/notify.h 00:02:40.145 TEST_HEADER include/spdk/nvme.h 00:02:40.145 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.145 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.145 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.145 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.145 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.145 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.145 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.145 TEST_HEADER include/spdk/nvmf.h 00:02:40.145 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.145 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.145 TEST_HEADER include/spdk/opal.h 00:02:40.145 TEST_HEADER include/spdk/opal_spec.h 00:02:40.145 TEST_HEADER include/spdk/pci_ids.h 00:02:40.145 TEST_HEADER include/spdk/pipe.h 00:02:40.145 TEST_HEADER include/spdk/queue.h 00:02:40.145 TEST_HEADER include/spdk/reduce.h 00:02:40.145 TEST_HEADER include/spdk/rpc.h 00:02:40.145 TEST_HEADER include/spdk/scheduler.h 00:02:40.145 TEST_HEADER include/spdk/scsi.h 00:02:40.145 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.145 TEST_HEADER include/spdk/sock.h 00:02:40.145 TEST_HEADER include/spdk/stdinc.h 00:02:40.145 TEST_HEADER include/spdk/string.h 00:02:40.145 TEST_HEADER include/spdk/thread.h 00:02:40.145 TEST_HEADER include/spdk/trace.h 00:02:40.145 CC test/env/mem_callbacks/mem_callbacks.o 00:02:40.145 TEST_HEADER include/spdk/trace_parser.h 00:02:40.145 TEST_HEADER include/spdk/tree.h 00:02:40.145 TEST_HEADER include/spdk/ublk.h 00:02:40.145 TEST_HEADER include/spdk/util.h 00:02:40.145 TEST_HEADER include/spdk/uuid.h 00:02:40.145 TEST_HEADER include/spdk/version.h 00:02:40.145 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.145 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:40.145 TEST_HEADER include/spdk/vhost.h 00:02:40.145 TEST_HEADER include/spdk/vmd.h 00:02:40.145 TEST_HEADER include/spdk/xor.h 00:02:40.145 TEST_HEADER include/spdk/zipf.h 00:02:40.145 CXX test/cpp_headers/accel.o 00:02:40.145 LINK spdk_top 00:02:40.145 CC test/env/vtophys/vtophys.o 00:02:40.145 LINK event_perf 00:02:40.402 CC test/rpc_client/rpc_client_test.o 00:02:40.402 LINK hello_sock 00:02:40.402 LINK spdk_nvme 00:02:40.402 CXX test/cpp_headers/accel_module.o 00:02:40.402 LINK spdk_bdev 00:02:40.402 LINK vtophys 00:02:40.402 CC test/event/reactor/reactor.o 00:02:40.402 LINK nvme_fuzz 00:02:40.402 LINK rpc_client_test 00:02:40.402 CXX test/cpp_headers/assert.o 00:02:40.402 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.659 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.659 CC app/vhost/vhost.o 00:02:40.659 CC test/accel/dif/dif.o 00:02:40.659 LINK reactor 00:02:40.659 CXX test/cpp_headers/barrier.o 00:02:40.659 LINK lsvmd 00:02:40.659 CC examples/idxd/perf/perf.o 00:02:40.659 LINK mem_callbacks 00:02:40.659 LINK env_dpdk_post_init 00:02:40.659 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:40.659 LINK vhost 00:02:40.917 CXX test/cpp_headers/base64.o 00:02:40.917 CC test/event/reactor_perf/reactor_perf.o 00:02:40.917 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:40.917 CC examples/vmd/led/led.o 00:02:40.917 CC test/env/memory/memory_ut.o 00:02:40.917 CXX test/cpp_headers/bdev.o 00:02:40.917 CC examples/accel/perf/accel_perf.o 00:02:40.917 LINK reactor_perf 00:02:40.917 LINK idxd_perf 00:02:40.917 LINK led 00:02:41.174 CC examples/blob/hello_world/hello_blob.o 00:02:41.174 LINK hello_fsdev 00:02:41.174 CXX test/cpp_headers/bdev_module.o 
00:02:41.174 CXX test/cpp_headers/bdev_zone.o 00:02:41.174 CC test/event/app_repeat/app_repeat.o 00:02:41.174 CXX test/cpp_headers/bit_array.o 00:02:41.174 CC examples/nvme/hello_world/hello_world.o 00:02:41.431 LINK hello_blob 00:02:41.431 LINK dif 00:02:41.431 CC examples/nvme/reconnect/reconnect.o 00:02:41.431 LINK app_repeat 00:02:41.431 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.431 LINK accel_perf 00:02:41.431 CXX test/cpp_headers/bit_pool.o 00:02:41.431 LINK hello_world 00:02:41.687 CC examples/nvme/arbitration/arbitration.o 00:02:41.687 CXX test/cpp_headers/blob_bdev.o 00:02:41.687 CC examples/blob/cli/blobcli.o 00:02:41.687 CC test/event/scheduler/scheduler.o 00:02:41.687 CC examples/nvme/hotplug/hotplug.o 00:02:41.687 LINK reconnect 00:02:41.687 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.687 CC test/env/pci/pci_ut.o 00:02:41.687 LINK scheduler 00:02:41.944 LINK nvme_manage 00:02:41.944 LINK hotplug 00:02:41.944 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.944 LINK arbitration 00:02:41.944 CXX test/cpp_headers/blobfs.o 00:02:41.944 LINK memory_ut 00:02:41.944 LINK cmb_copy 00:02:41.944 CC examples/nvme/abort/abort.o 00:02:41.944 CXX test/cpp_headers/blob.o 00:02:42.201 LINK blobcli 00:02:42.201 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:42.201 LINK pci_ut 00:02:42.201 CC test/blobfs/mkfs/mkfs.o 00:02:42.201 CXX test/cpp_headers/conf.o 00:02:42.201 CXX test/cpp_headers/config.o 00:02:42.201 CC test/lvol/esnap/esnap.o 00:02:42.201 LINK pmr_persistence 00:02:42.201 LINK mkfs 00:02:42.201 CC examples/bdev/hello_world/hello_bdev.o 00:02:42.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.463 CC examples/bdev/bdevperf/bdevperf.o 00:02:42.463 CXX test/cpp_headers/cpuset.o 00:02:42.463 LINK abort 00:02:42.463 CC test/app/histogram_perf/histogram_perf.o 00:02:42.463 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:42.463 CC test/app/jsoncat/jsoncat.o 00:02:42.463 LINK iscsi_fuzz 00:02:42.463 CXX test/cpp_headers/crc16.o 00:02:42.463 LINK hello_bdev 00:02:42.725 CC test/nvme/aer/aer.o 00:02:42.725 LINK histogram_perf 00:02:42.725 LINK jsoncat 00:02:42.725 CC test/nvme/reset/reset.o 00:02:42.725 CXX test/cpp_headers/crc32.o 00:02:42.725 CXX test/cpp_headers/crc64.o 00:02:42.725 CXX test/cpp_headers/dif.o 00:02:42.725 CXX test/cpp_headers/dma.o 00:02:42.725 CC test/app/stub/stub.o 00:02:42.725 CXX test/cpp_headers/endian.o 00:02:42.981 CC test/nvme/sgl/sgl.o 00:02:42.981 LINK aer 00:02:42.981 LINK reset 00:02:42.981 LINK vhost_fuzz 00:02:42.981 CC test/bdev/bdevio/bdevio.o 00:02:42.981 CXX test/cpp_headers/env_dpdk.o 00:02:42.981 CXX test/cpp_headers/env.o 00:02:42.981 LINK stub 00:02:42.981 CXX test/cpp_headers/event.o 00:02:42.981 CXX test/cpp_headers/fd_group.o 00:02:42.981 CC test/nvme/e2edp/nvme_dp.o 00:02:42.981 CC test/nvme/overhead/overhead.o 00:02:43.239 CXX test/cpp_headers/fd.o 00:02:43.239 CXX test/cpp_headers/file.o 00:02:43.239 LINK sgl 00:02:43.239 CC test/nvme/err_injection/err_injection.o 00:02:43.239 LINK bdevperf 00:02:43.239 CC test/nvme/startup/startup.o 00:02:43.239 LINK bdevio 00:02:43.239 CXX test/cpp_headers/fsdev.o 00:02:43.239 CC test/nvme/reserve/reserve.o 00:02:43.239 LINK nvme_dp 00:02:43.496 LINK err_injection 00:02:43.496 LINK overhead 00:02:43.496 CC test/nvme/simple_copy/simple_copy.o 00:02:43.496 LINK startup 00:02:43.496 CXX test/cpp_headers/fsdev_module.o 00:02:43.496 CXX test/cpp_headers/ftl.o 00:02:43.496 CC test/nvme/connect_stress/connect_stress.o 00:02:43.496 CC examples/nvmf/nvmf/nvmf.o 00:02:43.496 LINK reserve 
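The TEST_HEADER declarations and the long run of "CXX test/cpp_headers/*.o" compiles interleaved through this section implement a header self-containedness check: every public spdk header is compiled in its own translation unit, so a missing transitive include fails the build instead of hiding behind another header. A reduced sketch of that idea, assuming the stock include/spdk layout rather than the test's real harness:

```bash
#!/usr/bin/env bash
# Sketch: compile each public header standalone, mirroring the
# "CXX test/cpp_headers/*.o" sweep in the log. Paths are assumptions.
set -e
for hdr in include/spdk/*.h; do
    echo "#include <spdk/$(basename "$hdr")>" |
        g++ -x c++ -Iinclude -c -o /dev/null -
done
echo "all public headers compile in isolation"
```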
00:02:43.496 CC test/nvme/boot_partition/boot_partition.o 00:02:43.496 CC test/nvme/compliance/nvme_compliance.o 00:02:43.496 CXX test/cpp_headers/fuse_dispatcher.o 00:02:43.753 LINK simple_copy 00:02:43.753 CC test/nvme/fused_ordering/fused_ordering.o 00:02:43.753 LINK connect_stress 00:02:43.753 LINK boot_partition 00:02:43.753 CXX test/cpp_headers/gpt_spec.o 00:02:43.753 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:43.753 CC test/nvme/fdp/fdp.o 00:02:43.753 CC test/nvme/cuse/cuse.o 00:02:43.753 LINK nvmf 00:02:43.753 CXX test/cpp_headers/hexlify.o 00:02:43.753 CXX test/cpp_headers/histogram_data.o 00:02:43.753 CXX test/cpp_headers/idxd.o 00:02:44.010 LINK fused_ordering 00:02:44.010 LINK nvme_compliance 00:02:44.010 LINK doorbell_aers 00:02:44.010 CXX test/cpp_headers/idxd_spec.o 00:02:44.010 CXX test/cpp_headers/init.o 00:02:44.010 CXX test/cpp_headers/ioat.o 00:02:44.010 CXX test/cpp_headers/ioat_spec.o 00:02:44.010 CXX test/cpp_headers/iscsi_spec.o 00:02:44.010 CXX test/cpp_headers/json.o 00:02:44.010 CXX test/cpp_headers/jsonrpc.o 00:02:44.010 CXX test/cpp_headers/keyring.o 00:02:44.010 LINK fdp 00:02:44.010 CXX test/cpp_headers/keyring_module.o 00:02:44.267 CXX test/cpp_headers/likely.o 00:02:44.267 CXX test/cpp_headers/log.o 00:02:44.267 CXX test/cpp_headers/lvol.o 00:02:44.267 CXX test/cpp_headers/md5.o 00:02:44.267 CXX test/cpp_headers/memory.o 00:02:44.267 CXX test/cpp_headers/mmio.o 00:02:44.267 CXX test/cpp_headers/nbd.o 00:02:44.267 CXX test/cpp_headers/net.o 00:02:44.267 CXX test/cpp_headers/notify.o 00:02:44.267 CXX test/cpp_headers/nvme.o 00:02:44.267 CXX test/cpp_headers/nvme_intel.o 00:02:44.267 CXX test/cpp_headers/nvme_ocssd.o 00:02:44.267 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:44.267 CXX test/cpp_headers/nvme_spec.o 00:02:44.267 CXX test/cpp_headers/nvme_zns.o 00:02:44.525 CXX test/cpp_headers/nvmf_cmd.o 00:02:44.525 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:44.525 CXX test/cpp_headers/nvmf.o 00:02:44.525 CXX test/cpp_headers/nvmf_spec.o 00:02:44.525 CXX test/cpp_headers/nvmf_transport.o 00:02:44.525 CXX test/cpp_headers/opal.o 00:02:44.525 CXX test/cpp_headers/opal_spec.o 00:02:44.525 CXX test/cpp_headers/pci_ids.o 00:02:44.525 CXX test/cpp_headers/pipe.o 00:02:44.525 CXX test/cpp_headers/queue.o 00:02:44.525 CXX test/cpp_headers/reduce.o 00:02:44.525 CXX test/cpp_headers/rpc.o 00:02:44.525 CXX test/cpp_headers/scheduler.o 00:02:44.525 CXX test/cpp_headers/scsi.o 00:02:44.525 CXX test/cpp_headers/scsi_spec.o 00:02:44.525 CXX test/cpp_headers/sock.o 00:02:44.782 CXX test/cpp_headers/stdinc.o 00:02:44.782 CXX test/cpp_headers/string.o 00:02:44.782 CXX test/cpp_headers/thread.o 00:02:44.782 CXX test/cpp_headers/trace.o 00:02:44.782 CXX test/cpp_headers/trace_parser.o 00:02:44.782 CXX test/cpp_headers/tree.o 00:02:44.782 CXX test/cpp_headers/ublk.o 00:02:44.782 CXX test/cpp_headers/util.o 00:02:44.782 CXX test/cpp_headers/uuid.o 00:02:44.782 CXX test/cpp_headers/version.o 00:02:44.782 CXX test/cpp_headers/vfio_user_pci.o 00:02:44.782 CXX test/cpp_headers/vfio_user_spec.o 00:02:44.782 CXX test/cpp_headers/vhost.o 00:02:44.782 CXX test/cpp_headers/vmd.o 00:02:44.782 CXX test/cpp_headers/xor.o 00:02:45.040 CXX test/cpp_headers/zipf.o 00:02:45.040 LINK cuse 00:02:47.574 LINK esnap 00:02:47.574 ************************************ 00:02:47.574 END TEST make 00:02:47.574 ************************************ 00:02:47.574 00:02:47.574 real 1m12.980s 00:02:47.574 user 6m51.223s 00:02:47.574 sys 1m14.643s 00:02:47.574 06:01:07 make -- common/autotest_common.sh@1128 -- $ 
xtrace_disable 00:02:47.574 06:01:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:47.574 06:01:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:47.574 06:01:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:47.574 06:01:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:47.574 06:01:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.574 06:01:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:47.574 06:01:07 -- pm/common@44 -- $ pid=5070 00:02:47.574 06:01:07 -- pm/common@50 -- $ kill -TERM 5070 00:02:47.574 06:01:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.574 06:01:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:47.574 06:01:07 -- pm/common@44 -- $ pid=5071 00:02:47.574 06:01:07 -- pm/common@50 -- $ kill -TERM 5071 00:02:47.574 06:01:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:47.574 06:01:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.574 06:01:07 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:47.574 06:01:07 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:47.574 06:01:07 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:47.832 06:01:07 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:47.832 06:01:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:47.832 06:01:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:47.832 06:01:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:47.832 06:01:07 -- scripts/common.sh@336 -- # IFS=.-: 00:02:47.832 06:01:07 -- scripts/common.sh@336 -- # read -ra ver1 00:02:47.832 06:01:07 -- scripts/common.sh@337 -- # IFS=.-: 00:02:47.832 06:01:07 -- scripts/common.sh@337 -- # read -ra ver2 00:02:47.832 06:01:07 -- scripts/common.sh@338 -- # local 'op=<' 00:02:47.832 06:01:07 -- scripts/common.sh@340 -- # ver1_l=2 00:02:47.832 06:01:07 -- scripts/common.sh@341 -- # ver2_l=1 00:02:47.832 06:01:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:47.832 06:01:07 -- scripts/common.sh@344 -- # case "$op" in 00:02:47.832 06:01:07 -- scripts/common.sh@345 -- # : 1 00:02:47.832 06:01:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:47.832 06:01:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:47.832 06:01:07 -- scripts/common.sh@365 -- # decimal 1 00:02:47.832 06:01:07 -- scripts/common.sh@353 -- # local d=1 00:02:47.832 06:01:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:47.832 06:01:07 -- scripts/common.sh@355 -- # echo 1 00:02:47.832 06:01:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:47.832 06:01:07 -- scripts/common.sh@366 -- # decimal 2 00:02:47.832 06:01:07 -- scripts/common.sh@353 -- # local d=2 00:02:47.832 06:01:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:47.832 06:01:07 -- scripts/common.sh@355 -- # echo 2 00:02:47.832 06:01:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:47.832 06:01:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:47.832 06:01:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:47.832 06:01:07 -- scripts/common.sh@368 -- # return 0 00:02:47.832 06:01:07 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:47.832 06:01:07 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:47.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.832 --rc genhtml_branch_coverage=1 00:02:47.832 --rc genhtml_function_coverage=1 00:02:47.832 --rc genhtml_legend=1 00:02:47.832 --rc geninfo_all_blocks=1 00:02:47.832 --rc geninfo_unexecuted_blocks=1 00:02:47.832 00:02:47.832 ' 00:02:47.832 06:01:07 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:47.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.832 --rc genhtml_branch_coverage=1 00:02:47.832 --rc genhtml_function_coverage=1 00:02:47.832 --rc genhtml_legend=1 00:02:47.832 --rc geninfo_all_blocks=1 00:02:47.832 --rc geninfo_unexecuted_blocks=1 00:02:47.832 00:02:47.832 ' 00:02:47.832 06:01:07 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:47.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.832 --rc genhtml_branch_coverage=1 00:02:47.832 --rc genhtml_function_coverage=1 00:02:47.832 --rc genhtml_legend=1 00:02:47.832 --rc geninfo_all_blocks=1 00:02:47.832 --rc geninfo_unexecuted_blocks=1 00:02:47.832 00:02:47.833 ' 00:02:47.833 06:01:07 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:47.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.833 --rc genhtml_branch_coverage=1 00:02:47.833 --rc genhtml_function_coverage=1 00:02:47.833 --rc genhtml_legend=1 00:02:47.833 --rc geninfo_all_blocks=1 00:02:47.833 --rc geninfo_unexecuted_blocks=1 00:02:47.833 00:02:47.833 ' 00:02:47.833 06:01:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:47.833 06:01:07 -- nvmf/common.sh@7 -- # uname -s 00:02:47.833 06:01:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.833 06:01:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.833 06:01:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.833 06:01:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.833 06:01:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.833 06:01:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.833 06:01:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.833 06:01:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.833 06:01:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.833 06:01:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.833 06:01:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6828c9e-a976-459e-9e48-80a08ea9ebe5 00:02:47.833 
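The xtrace above (lt 1.15 2 dispatching to cmp_versions 1.15 '<' 2 in scripts/common.sh) gates the lcov options on the tool's version by splitting both version strings on ".", "-" and ":" and comparing the fields left to right, padding the shorter string with zeros. A simplified re-implementation of that comparison, assuming purely numeric components rather than reproducing SPDK's exact helper:

```bash
#!/usr/bin/env bash
# Sketch: component-wise "less than" on dotted version strings, the
# logic the cmp_versions trace above walks through. Numeric fields only.
version_lt() {
    local IFS='.-:' i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # versions are equal
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```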
06:01:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=b6828c9e-a976-459e-9e48-80a08ea9ebe5 00:02:47.833 06:01:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.833 06:01:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.833 06:01:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:47.833 06:01:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.833 06:01:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:47.833 06:01:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:47.833 06:01:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.833 06:01:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.833 06:01:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.833 06:01:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.833 06:01:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.833 06:01:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.833 06:01:07 -- paths/export.sh@5 -- # export PATH 00:02:47.833 06:01:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.833 06:01:07 -- nvmf/common.sh@51 -- # : 0 00:02:47.833 06:01:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:47.833 06:01:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:47.833 06:01:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.833 06:01:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.833 06:01:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.833 06:01:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:47.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:47.833 06:01:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:47.833 06:01:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:47.833 06:01:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:47.833 06:01:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.833 06:01:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.833 06:01:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.833 06:01:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.833 06:01:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:47.833 06:01:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.833 06:01:07 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:47.833 06:01:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.833 06:01:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.833 06:01:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.833 06:01:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54317 00:02:47.833 06:01:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.833 06:01:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.833 06:01:07 -- pm/common@17 -- # local monitor 00:02:47.833 06:01:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.833 06:01:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.833 06:01:07 -- pm/common@25 -- # sleep 1 00:02:47.833 06:01:07 -- pm/common@21 -- # date +%s 00:02:47.833 06:01:07 -- pm/common@21 -- # date +%s 00:02:47.833 06:01:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732082467 00:02:47.833 06:01:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732082467 00:02:47.833 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732082467_collect-vmstat.pm.log 00:02:47.833 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732082467_collect-cpu-load.pm.log 00:02:48.842 06:01:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.842 06:01:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.842 06:01:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:48.842 06:01:08 -- common/autotest_common.sh@10 -- # set +x 00:02:48.842 06:01:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.842 06:01:08 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:48.842 06:01:08 -- common/autotest_common.sh@10 -- # set +x 00:02:48.842 06:01:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:48.842 06:01:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:48.842 06:01:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:48.842 06:01:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:48.842 06:01:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:48.842 06:01:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.842 06:01:08 -- common/autotest_common.sh@1455 -- # uname 00:02:48.842 06:01:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:48.842 06:01:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.842 06:01:08 -- common/autotest_common.sh@1475 -- # uname 00:02:48.842 06:01:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:48.842 06:01:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.842 06:01:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:48.842 lcov: LCOV version 1.15 00:02:48.842 06:01:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:03.698 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:18.579 06:01:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:18.579 06:01:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:18.579 06:01:35 -- common/autotest_common.sh@10 -- # set +x 00:03:18.579 06:01:35 -- spdk/autotest.sh@78 -- # rm -f 00:03:18.579 06:01:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:18.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:18.579 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:18.579 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:18.579 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:18.579 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:18.579 06:01:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:18.579 06:01:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:18.579 06:01:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:18.579 06:01:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:18.579 06:01:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.579 06:01:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.579 06:01:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1c1n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1648 -- # local device=nvme1c1n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.579 06:01:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.579 06:01:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.579 06:01:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:18.579 06:01:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:18.579 
06:01:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.579 06:01:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:03:18.579 06:01:36 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:03:18.579 06:01:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:18.579 06:01:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:03:18.579 06:01:36 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:03:18.579 06:01:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:03:18.579 06:01:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:18.579 06:01:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:18.579 06:01:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.579 06:01:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:18.579 06:01:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:18.579 06:01:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:18.579 06:01:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:18.579 No valid GPT data, bailing 00:03:18.579 06:01:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:18.579 06:01:36 -- scripts/common.sh@394 -- # pt= 00:03:18.579 06:01:36 -- scripts/common.sh@395 -- # return 1 00:03:18.579 06:01:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:18.579 1+0 records in 00:03:18.579 1+0 records out 00:03:18.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0076484 s, 137 MB/s 00:03:18.579 06:01:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.579 06:01:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:18.579 06:01:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:18.579 06:01:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:18.579 06:01:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:18.579 No valid GPT data, bailing 00:03:18.579 06:01:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:18.579 06:01:36 -- scripts/common.sh@394 -- # pt= 00:03:18.579 06:01:36 -- scripts/common.sh@395 -- # return 1 00:03:18.579 06:01:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:18.579 1+0 records in 00:03:18.579 1+0 records out 00:03:18.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00373825 s, 280 MB/s 00:03:18.580 06:01:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.580 06:01:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:18.580 06:01:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:18.580 06:01:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:18.580 06:01:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:18.580 No valid GPT data, bailing 00:03:18.580 06:01:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:18.580 06:01:36 -- scripts/common.sh@394 -- # pt= 00:03:18.580 06:01:36 -- scripts/common.sh@395 -- # return 1 00:03:18.580 06:01:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:18.580 1+0 
records in 00:03:18.580 1+0 records out 00:03:18.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00276289 s, 380 MB/s 00:03:18.580 06:01:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.580 06:01:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:18.580 06:01:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:18.580 06:01:36 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:18.580 06:01:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:18.580 No valid GPT data, bailing 00:03:18.580 06:01:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:18.580 06:01:36 -- scripts/common.sh@394 -- # pt= 00:03:18.580 06:01:36 -- scripts/common.sh@395 -- # return 1 00:03:18.580 06:01:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:18.580 1+0 records in 00:03:18.580 1+0 records out 00:03:18.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0038918 s, 269 MB/s 00:03:18.580 06:01:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.580 06:01:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:18.580 06:01:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:03:18.580 06:01:36 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:03:18.580 06:01:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:03:18.580 No valid GPT data, bailing 00:03:18.580 06:01:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:03:18.580 06:01:36 -- scripts/common.sh@394 -- # pt= 00:03:18.580 06:01:36 -- scripts/common.sh@395 -- # return 1 00:03:18.580 06:01:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:03:18.580 1+0 records in 00:03:18.580 1+0 records out 00:03:18.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00328857 s, 319 MB/s 00:03:18.580 06:01:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.580 06:01:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:18.580 06:01:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:03:18.580 06:01:37 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:03:18.580 06:01:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:03:18.580 No valid GPT data, bailing 00:03:18.580 06:01:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:03:18.580 06:01:37 -- scripts/common.sh@394 -- # pt= 00:03:18.580 06:01:37 -- scripts/common.sh@395 -- # return 1 00:03:18.580 06:01:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:03:18.580 1+0 records in 00:03:18.580 1+0 records out 00:03:18.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037996 s, 276 MB/s 00:03:18.580 06:01:37 -- spdk/autotest.sh@105 -- # sync 00:03:18.580 06:01:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:18.580 06:01:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:18.580 06:01:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:19.145 06:01:38 -- spdk/autotest.sh@111 -- # uname -s 00:03:19.145 06:01:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:19.145 06:01:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:19.145 06:01:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:19.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:19.966 
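Each block_in_use/dd pair above follows the same pattern: probe the namespace for a partition table (scripts/spdk-gpt.py, then blkid -s PTTYPE), and when nothing is found, zero the first MiB so stale metadata cannot survive into the tests. A reduced, destructive sketch of that loop, with example device names and the spdk-gpt.py pre-check omitted:

```bash
#!/usr/bin/env bash
# Sketch of the wipe loop above: zero the first 1 MiB of any namespace
# that carries no partition table. DESTRUCTIVE - the device list is a
# placeholder, and the spdk-gpt.py check is omitted for brevity.
for dev in /dev/nvme0n1 /dev/nvme1n1; do
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
    if [[ -z $pt ]]; then
        echo "No valid GPT data on $dev, wiping first MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
```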
Hugepages 00:03:19.966 node hugesize free / total 00:03:19.966 node0 1048576kB 0 / 0 00:03:19.966 node0 2048kB 0 / 0 00:03:19.966 00:03:19.966 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.966 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:19.966 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:19.966 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:19.966 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:03:19.966 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:19.966 06:01:39 -- spdk/autotest.sh@117 -- # uname -s 00:03:19.966 06:01:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:19.966 06:01:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:19.966 06:01:39 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:20.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:21.093 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:21.093 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:21.093 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:21.093 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:21.093 06:01:40 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:22.031 06:01:41 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:22.031 06:01:41 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:22.031 06:01:41 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:22.031 06:01:41 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:22.031 06:01:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:22.031 06:01:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:22.031 06:01:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:22.031 06:01:41 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:22.031 06:01:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:22.031 06:01:41 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:22.031 06:01:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:22.031 06:01:41 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:22.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:22.546 Waiting for block devices as requested 00:03:22.546 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:22.546 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:22.804 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:22.804 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:28.062 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:28.062 06:01:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:28.062 06:01:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:28.062 06:01:47 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:28.062 06:01:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:28.062 06:01:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:28.062 06:01:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:28.062 06:01:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:28.062 06:01:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:28.062 06:01:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1541 -- # continue 00:03:28.062 06:01:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:28.062 06:01:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:28.062 06:01:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:28.062 06:01:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:28.062 06:01:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1541 -- # continue 00:03:28.062 06:01:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:28.062 06:01:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 
00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:28.062 06:01:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:28.062 06:01:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:28.062 06:01:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:03:28.062 06:01:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:28.063 06:01:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:28.063 06:01:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:28.063 06:01:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:28.063 06:01:47 -- common/autotest_common.sh@1541 -- # continue 00:03:28.063 06:01:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:28.063 06:01:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:28.063 06:01:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:28.063 06:01:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:03:28.063 06:01:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:28.063 06:01:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:28.063 06:01:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:28.063 06:01:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:03:28.063 06:01:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:03:28.063 06:01:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:03:28.063 06:01:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:03:28.063 06:01:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:28.063 06:01:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:28.063 06:01:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:28.063 06:01:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:28.063 06:01:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:28.063 06:01:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:28.063 06:01:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:03:28.063 06:01:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:28.063 06:01:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:28.063 06:01:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:03:28.063 06:01:47 -- common/autotest_common.sh@1541 -- # continue 00:03:28.063 06:01:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:28.063 06:01:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:28.063 06:01:47 -- common/autotest_common.sh@10 -- # set +x 00:03:28.063 06:01:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:28.063 06:01:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:28.063 06:01:47 -- common/autotest_common.sh@10 -- # set +x 00:03:28.063 06:01:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:28.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:28.911 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:28.911 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:28.911 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:28.911 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:28.911 06:01:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:28.911 06:01:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:28.911 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:03:28.911 06:01:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:28.911 06:01:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:28.911 06:01:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:28.911 06:01:48 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:28.911 06:01:48 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:28.911 06:01:48 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:28.911 06:01:48 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:28.911 06:01:48 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:28.911 06:01:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:28.911 06:01:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:28.911 06:01:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:28.911 06:01:48 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:28.911 06:01:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:28.911 06:01:48 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:28.911 06:01:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:28.911 06:01:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:28.911 06:01:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:28.911 06:01:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:28.911 06:01:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:28.911 06:01:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:28.911 06:01:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:28.911 06:01:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:28.911 06:01:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:28.911 06:01:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:29.169 06:01:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:29.169 06:01:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:29.169 06:01:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
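The get_nvme_ctrlr_from_bdf traces above resolve each PCI address to its character device by following the /sys/class/nvme/nvme* symlinks back to the BDF, then parse "nvme id-ctrl" for the OACS capability mask and the unallocated capacity. A condensed sketch of the same resolution, with simplified error handling and nvme-cli assumed to be installed:

```bash
#!/usr/bin/env bash
# Sketch: map a PCI BDF to its /dev/nvmeX controller node via sysfs,
# then pull the "oacs" field from "nvme id-ctrl", as traced above.
bdf_to_ctrlr() {
    local bdf=$1 link
    for link in /sys/class/nvme/nvme*; do
        # each entry resolves to .../pci0000:00/<bdf>/nvme/nvmeX
        if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
            echo "/dev/$(basename "$link")"
            return 0
        fi
    done
    return 1
}

ctrlr=$(bdf_to_ctrlr 0000:00:10.0) || exit 1
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
echo "controller=$ctrlr oacs=$oacs"
```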
00:03:29.169 06:01:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:29.169 06:01:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:29.169 06:01:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:29.169 06:01:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:29.169 06:01:48 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:29.169 06:01:48 -- common/autotest_common.sh@1570 -- # return 0 00:03:29.169 06:01:48 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:29.169 06:01:48 -- common/autotest_common.sh@1578 -- # return 0 00:03:29.169 06:01:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:29.169 06:01:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:29.169 06:01:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:29.169 06:01:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:29.169 06:01:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:29.169 06:01:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.169 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:03:29.169 06:01:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:29.169 06:01:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:29.169 06:01:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:29.169 06:01:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:29.169 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:03:29.169 ************************************ 00:03:29.169 START TEST env 00:03:29.169 ************************************ 00:03:29.169 06:01:48 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:29.169 * Looking for test storage... 00:03:29.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:29.169 06:01:48 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:29.169 06:01:48 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:29.169 06:01:48 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:29.169 06:01:48 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:29.169 06:01:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.169 06:01:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.169 06:01:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.169 06:01:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.169 06:01:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.169 06:01:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.169 06:01:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.169 06:01:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.169 06:01:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.169 06:01:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.169 06:01:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.169 06:01:48 env -- scripts/common.sh@344 -- # case "$op" in 00:03:29.169 06:01:48 env -- scripts/common.sh@345 -- # : 1 00:03:29.169 06:01:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.169 06:01:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.170 06:01:48 env -- scripts/common.sh@365 -- # decimal 1 00:03:29.170 06:01:48 env -- scripts/common.sh@353 -- # local d=1 00:03:29.170 06:01:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.170 06:01:48 env -- scripts/common.sh@355 -- # echo 1 00:03:29.170 06:01:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.170 06:01:48 env -- scripts/common.sh@366 -- # decimal 2 00:03:29.170 06:01:48 env -- scripts/common.sh@353 -- # local d=2 00:03:29.170 06:01:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.170 06:01:48 env -- scripts/common.sh@355 -- # echo 2 00:03:29.170 06:01:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.170 06:01:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.170 06:01:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.170 06:01:48 env -- scripts/common.sh@368 -- # return 0 00:03:29.170 06:01:48 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.170 06:01:48 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.170 --rc genhtml_branch_coverage=1 00:03:29.170 --rc genhtml_function_coverage=1 00:03:29.170 --rc genhtml_legend=1 00:03:29.170 --rc geninfo_all_blocks=1 00:03:29.170 --rc geninfo_unexecuted_blocks=1 00:03:29.170 00:03:29.170 ' 00:03:29.170 06:01:48 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.170 --rc genhtml_branch_coverage=1 00:03:29.170 --rc genhtml_function_coverage=1 00:03:29.170 --rc genhtml_legend=1 00:03:29.170 --rc geninfo_all_blocks=1 00:03:29.170 --rc geninfo_unexecuted_blocks=1 00:03:29.170 00:03:29.170 ' 00:03:29.170 06:01:48 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.170 --rc genhtml_branch_coverage=1 00:03:29.170 --rc genhtml_function_coverage=1 00:03:29.170 --rc genhtml_legend=1 00:03:29.170 --rc geninfo_all_blocks=1 00:03:29.170 --rc geninfo_unexecuted_blocks=1 00:03:29.170 00:03:29.170 ' 00:03:29.170 06:01:48 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.170 --rc genhtml_branch_coverage=1 00:03:29.170 --rc genhtml_function_coverage=1 00:03:29.170 --rc genhtml_legend=1 00:03:29.170 --rc geninfo_all_blocks=1 00:03:29.170 --rc geninfo_unexecuted_blocks=1 00:03:29.170 00:03:29.170 ' 00:03:29.170 06:01:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:29.170 06:01:48 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:29.170 06:01:48 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:29.170 06:01:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.170 ************************************ 00:03:29.170 START TEST env_memory 00:03:29.170 ************************************ 00:03:29.170 06:01:48 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:29.170 00:03:29.170 00:03:29.170 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.170 http://cunit.sourceforge.net/ 00:03:29.170 00:03:29.170 00:03:29.170 Suite: memory 00:03:29.170 Test: alloc and free memory map ...[2024-11-20 06:01:48.743883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:29.170 passed 00:03:29.170 Test: mem map translation ...[2024-11-20 06:01:48.773852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:29.170 [2024-11-20 06:01:48.773981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:29.170 [2024-11-20 06:01:48.774073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:29.170 [2024-11-20 06:01:48.774105] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:29.427 passed 00:03:29.427 Test: mem map registration ...[2024-11-20 06:01:48.837244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:29.427 [2024-11-20 06:01:48.837513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:29.427 passed 00:03:29.427 Test: mem map adjacent registrations ...passed 00:03:29.427 00:03:29.427 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.427 suites 1 1 n/a 0 0 00:03:29.427 tests 4 4 4 0 0 00:03:29.427 asserts 152 152 152 0 n/a 00:03:29.427 00:03:29.427 Elapsed time = 0.268 seconds 00:03:29.427 ************************************ 00:03:29.427 END TEST env_memory 00:03:29.427 ************************************ 00:03:29.427 00:03:29.427 real 0m0.303s 00:03:29.427 user 0m0.273s 00:03:29.427 sys 0m0.020s 00:03:29.427 06:01:49 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:29.427 06:01:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:29.427 06:01:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:29.427 06:01:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:29.427 06:01:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:29.427 06:01:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.427 ************************************ 00:03:29.427 START TEST env_vtophys 00:03:29.427 ************************************ 00:03:29.427 06:01:49 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:29.687 EAL: lib.eal log level changed from notice to debug 00:03:29.687 EAL: Detected lcore 0 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 1 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 2 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 3 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 4 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 5 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 6 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 7 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 8 as core 0 on socket 0 00:03:29.687 EAL: Detected lcore 9 as core 0 on socket 0 00:03:29.687 EAL: Maximum logical cores by configuration: 128 00:03:29.687 EAL: Detected CPU lcores: 10 00:03:29.687 EAL: Detected NUMA nodes: 1 00:03:29.687 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:29.687 EAL: Detected shared linkage of DPDK 00:03:29.687 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:29.687 EAL: Selected IOVA mode 'PA' 00:03:29.687 EAL: Probing VFIO support... 00:03:29.687 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:29.687 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:29.687 EAL: Ask a virtual area of 0x2e000 bytes 00:03:29.687 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:29.687 EAL: Setting up physically contiguous memory... 00:03:29.687 EAL: Setting maximum number of open files to 524288 00:03:29.687 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:29.687 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:29.687 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.688 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:29.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.688 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.688 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:29.688 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:29.688 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.688 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:29.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.688 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.688 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:29.688 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:29.688 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.688 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:29.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.688 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.688 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:29.688 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:29.688 EAL: Ask a virtual area of 0x61000 bytes 00:03:29.688 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:29.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:29.688 EAL: Ask a virtual area of 0x400000000 bytes 00:03:29.688 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:29.688 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:29.688 EAL: Hugepages will be freed exactly as allocated. 00:03:29.688 EAL: No shared files mode enabled, IPC is disabled 00:03:29.688 EAL: No shared files mode enabled, IPC is disabled 00:03:29.688 EAL: TSC frequency is ~2600000 KHz 00:03:29.688 EAL: Main lcore 0 is ready (tid=7f5de8ec2a40;cpuset=[0]) 00:03:29.688 EAL: Trying to obtain current memory policy. 00:03:29.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:29.688 EAL: Restoring previous memory policy: 0 00:03:29.688 EAL: request: mp_malloc_sync 00:03:29.688 EAL: No shared files mode enabled, IPC is disabled 00:03:29.688 EAL: Heap on socket 0 was expanded by 2MB 00:03:29.688 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:29.688 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:29.688 EAL: Mem event callback 'spdk:(nil)' registered 00:03:29.688 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:29.947 00:03:29.947 00:03:29.947 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.947 http://cunit.sourceforge.net/ 00:03:29.947 00:03:29.947 00:03:29.947 Suite: components_suite 00:03:30.205 Test: vtophys_malloc_test ...passed 00:03:30.205 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:30.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.205 EAL: Restoring previous memory policy: 4 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was expanded by 4MB 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was shrunk by 4MB 00:03:30.205 EAL: Trying to obtain current memory policy. 00:03:30.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.205 EAL: Restoring previous memory policy: 4 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was expanded by 6MB 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was shrunk by 6MB 00:03:30.205 EAL: Trying to obtain current memory policy. 00:03:30.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.205 EAL: Restoring previous memory policy: 4 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was expanded by 10MB 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was shrunk by 10MB 00:03:30.205 EAL: Trying to obtain current memory policy. 00:03:30.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.205 EAL: Restoring previous memory policy: 4 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was expanded by 18MB 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was shrunk by 18MB 00:03:30.205 EAL: Trying to obtain current memory policy. 00:03:30.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.205 EAL: Restoring previous memory policy: 4 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was expanded by 34MB 00:03:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.205 EAL: request: mp_malloc_sync 00:03:30.205 EAL: No shared files mode enabled, IPC is disabled 00:03:30.205 EAL: Heap on socket 0 was shrunk by 34MB 00:03:30.519 EAL: Trying to obtain current memory policy. 
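The "invalid spdk_mem_map_set_translation parameters" and "invalid spdk_mem_register parameters" errors near the top of this run are env_memory's negative tests doing their job: SPDK tracks memory in 2 MB chunks, so both vaddr and len must be 2 MB multiples, and vaddr=281474976710656 (2^48) fails the usermode address check. A minimal C sketch of the API under test, assuming an already-initialized SPDK environment; the translation value 0xabcd is illustrative:

    #include <stdint.h>
    #include "spdk/env.h"

    #define CHUNK_2MB (2ULL * 1024 * 1024)

    /* Called once per 2 MB region as memory is registered/unregistered. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        return 0;
    }

    static const struct spdk_mem_map_ops ops = {
        .notify_cb = notify_cb,
        .are_contiguous = NULL,
    };

    static void
    mem_map_demo(void)
    {
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

        /* Both calls fail: len=1234 and vaddr=1234 are not 2 MB multiples,
         * matching the parameter errors logged by the test above. */
        spdk_mem_map_set_translation(map, 0x200000, 1234, 0);
        spdk_mem_register((void *)(uintptr_t)0x200000, 1234);

        /* A 2 MB-aligned vaddr/len pair is accepted. */
        spdk_mem_map_set_translation(map, 0x200000, CHUNK_2MB, 0xabcd);

        spdk_mem_map_free(&map);
    }
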
00:03:30.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.519 EAL: Restoring previous memory policy: 4 00:03:30.519 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.519 EAL: request: mp_malloc_sync 00:03:30.519 EAL: No shared files mode enabled, IPC is disabled 00:03:30.519 EAL: Heap on socket 0 was expanded by 66MB 00:03:30.519 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.519 EAL: request: mp_malloc_sync 00:03:30.519 EAL: No shared files mode enabled, IPC is disabled 00:03:30.519 EAL: Heap on socket 0 was shrunk by 66MB 00:03:30.519 EAL: Trying to obtain current memory policy. 00:03:30.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.519 EAL: Restoring previous memory policy: 4 00:03:30.519 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.519 EAL: request: mp_malloc_sync 00:03:30.519 EAL: No shared files mode enabled, IPC is disabled 00:03:30.519 EAL: Heap on socket 0 was expanded by 130MB 00:03:30.779 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.779 EAL: request: mp_malloc_sync 00:03:30.779 EAL: No shared files mode enabled, IPC is disabled 00:03:30.779 EAL: Heap on socket 0 was shrunk by 130MB 00:03:30.779 EAL: Trying to obtain current memory policy. 00:03:30.779 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:30.779 EAL: Restoring previous memory policy: 4 00:03:30.779 EAL: Calling mem event callback 'spdk:(nil)' 00:03:30.779 EAL: request: mp_malloc_sync 00:03:30.779 EAL: No shared files mode enabled, IPC is disabled 00:03:30.779 EAL: Heap on socket 0 was expanded by 258MB 00:03:31.038 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.296 EAL: request: mp_malloc_sync 00:03:31.296 EAL: No shared files mode enabled, IPC is disabled 00:03:31.296 EAL: Heap on socket 0 was shrunk by 258MB 00:03:31.556 EAL: Trying to obtain current memory policy. 00:03:31.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.556 EAL: Restoring previous memory policy: 4 00:03:31.556 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.556 EAL: request: mp_malloc_sync 00:03:31.556 EAL: No shared files mode enabled, IPC is disabled 00:03:31.556 EAL: Heap on socket 0 was expanded by 514MB 00:03:32.128 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.128 EAL: request: mp_malloc_sync 00:03:32.128 EAL: No shared files mode enabled, IPC is disabled 00:03:32.128 EAL: Heap on socket 0 was shrunk by 514MB 00:03:32.692 EAL: Trying to obtain current memory policy. 
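Each expand/shrink pair above is one step of vtophys_malloc_test: allocate a buffer of the next size, watch the 'spdk:(nil)' mem event callback grow the hugepage heap, free it, watch the heap shrink. A standalone sketch of the same round trip, assuming spdk_env_init() has already succeeded; the 4 MB size and 2 MB alignment are illustrative:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    static void
    vtophys_demo(void)
    {
        /* DMA-safe allocation; may trigger "Heap on socket 0 was expanded". */
        void *buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
        if (buf == NULL) {
            return;
        }

        /* Translate the virtual address to a physical/IOVA address. */
        uint64_t size = 4 * 1024 * 1024;
        uint64_t paddr = spdk_vtophys(buf, &size);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "translation failed\n");
        } else {
            printf("vaddr %p -> paddr 0x%" PRIx64 " (%" PRIu64 " bytes contiguous)\n",
                   buf, paddr, size);
        }

        /* Freeing may trigger "Heap on socket 0 was shrunk". */
        spdk_dma_free(buf);
    }

All of the expand/shrink sizes in the log are 2 MB multiples because the heap is built from 2 MB hugepages.
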
00:03:32.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.948 EAL: Restoring previous memory policy: 4 00:03:32.948 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.948 EAL: request: mp_malloc_sync 00:03:32.948 EAL: No shared files mode enabled, IPC is disabled 00:03:32.948 EAL: Heap on socket 0 was expanded by 1026MB 00:03:33.880 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:35.067 passed 00:03:35.067 00:03:35.067 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.067 suites 1 1 n/a 0 0 00:03:35.067 tests 2 2 2 0 0 00:03:35.067 asserts 5831 5831 5831 0 n/a 00:03:35.067 00:03:35.067 Elapsed time = 5.225 seconds 00:03:35.067 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.067 EAL: request: mp_malloc_sync 00:03:35.067 EAL: No shared files mode enabled, IPC is disabled 00:03:35.067 EAL: Heap on socket 0 was shrunk by 2MB 00:03:35.067 EAL: No shared files mode enabled, IPC is disabled 00:03:35.067 EAL: No shared files mode enabled, IPC is disabled 00:03:35.067 EAL: No shared files mode enabled, IPC is disabled 00:03:35.405 00:03:35.405 real 0m5.663s 00:03:35.405 user 0m4.789s 00:03:35.405 sys 0m0.711s 00:03:35.405 06:01:54 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:35.405 ************************************ 00:03:35.405 END TEST env_vtophys 00:03:35.405 ************************************ 00:03:35.405 06:01:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:35.405 06:01:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:35.405 06:01:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.405 06:01:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.405 06:01:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.405 ************************************ 00:03:35.405 START TEST env_pci 00:03:35.405 ************************************ 00:03:35.405 06:01:54 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:35.405 00:03:35.405 00:03:35.405 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.405 http://cunit.sourceforge.net/ 00:03:35.405 00:03:35.405 00:03:35.405 Suite: pci 00:03:35.405 Test: pci_hook ...[2024-11-20 06:01:54.764837] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57060 has claimed it 00:03:35.405 EAL: Cannot find device (10000:00:01.0) 00:03:35.405 passed 00:03:35.405 00:03:35.405 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.405 suites 1 1 n/a 0 0 00:03:35.405 tests 1 1 1 0 0 00:03:35.405 asserts 25 25 25 0 n/a 00:03:35.405 00:03:35.405 Elapsed time = 0.006 seconds 00:03:35.405 EAL: Failed to attach device on primary process 00:03:35.405 00:03:35.405 real 0m0.067s 00:03:35.405 user 0m0.025s 00:03:35.405 sys 0m0.041s 00:03:35.405 06:01:54 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:35.405 06:01:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:35.405 ************************************ 00:03:35.405 END TEST env_pci 00:03:35.405 ************************************ 00:03:35.405 06:01:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:35.405 06:01:54 env -- env/env.sh@15 -- # uname 00:03:35.405 06:01:54 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:35.405 06:01:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:35.405 06:01:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.405 06:01:54 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:35.405 06:01:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.405 06:01:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.405 ************************************ 00:03:35.405 START TEST env_dpdk_post_init 00:03:35.405 ************************************ 00:03:35.405 06:01:54 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.405 EAL: Detected CPU lcores: 10 00:03:35.405 EAL: Detected NUMA nodes: 1 00:03:35.405 EAL: Detected shared linkage of DPDK 00:03:35.405 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:35.405 EAL: Selected IOVA mode 'PA' 00:03:35.663 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.663 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:35.663 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:35.663 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:35.663 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:35.663 Starting DPDK initialization... 00:03:35.663 Starting SPDK post initialization... 00:03:35.663 SPDK NVMe probe 00:03:35.663 Attaching to 0000:00:10.0 00:03:35.663 Attaching to 0000:00:11.0 00:03:35.663 Attaching to 0000:00:12.0 00:03:35.663 Attaching to 0000:00:13.0 00:03:35.663 Attached to 0000:00:13.0 00:03:35.663 Attached to 0000:00:10.0 00:03:35.663 Attached to 0000:00:11.0 00:03:35.663 Attached to 0000:00:12.0 00:03:35.663 Cleaning up... 
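env_dpdk_post_init above boils down to this flow: bring up the SPDK env with the core mask and base-virtaddr the harness passed on the command line, then let the NVMe driver probe the bus and attach the four emulated 1b36:0010 controllers. A hedged C sketch (app name is hypothetical; error handling trimmed; the opts_size assignment follows the current SPDK convention):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* attach to every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        opts.opts_size = sizeof(opts);
        spdk_env_opts_init(&opts);
        opts.name = "post_init_demo";          /* hypothetical */
        opts.core_mask = "0x1";                /* mirrors -c 0x1 */
        opts.base_virtaddr = 0x200000000000;   /* mirrors --base-virtaddr */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* trid == NULL probes the local PCIe bus, as in the log above. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }

Attach callbacks can fire out of bus order, which is why 0000:00:13.0 reports attached before 0000:00:10.0 above.
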
00:03:35.663 00:03:35.663 real 0m0.246s 00:03:35.663 user 0m0.082s 00:03:35.663 sys 0m0.066s 00:03:35.663 06:01:55 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:35.663 06:01:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:35.663 ************************************ 00:03:35.663 END TEST env_dpdk_post_init 00:03:35.663 ************************************ 00:03:35.663 06:01:55 env -- env/env.sh@26 -- # uname 00:03:35.663 06:01:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:35.663 06:01:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:35.663 06:01:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.663 06:01:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.663 06:01:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.663 ************************************ 00:03:35.663 START TEST env_mem_callbacks 00:03:35.663 ************************************ 00:03:35.663 06:01:55 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:35.663 EAL: Detected CPU lcores: 10 00:03:35.663 EAL: Detected NUMA nodes: 1 00:03:35.663 EAL: Detected shared linkage of DPDK 00:03:35.663 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:35.663 EAL: Selected IOVA mode 'PA' 00:03:35.920 00:03:35.920 00:03:35.920 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.920 http://cunit.sourceforge.net/ 00:03:35.920 00:03:35.920 00:03:35.920 Suite: memory 00:03:35.920 Test: test ... 00:03:35.920 register 0x200000200000 2097152 00:03:35.920 malloc 3145728 00:03:35.920 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.920 register 0x200000400000 4194304 00:03:35.920 buf 0x2000004fffc0 len 3145728 PASSED 00:03:35.920 malloc 64 00:03:35.920 buf 0x2000004ffec0 len 64 PASSED 00:03:35.920 malloc 4194304 00:03:35.920 register 0x200000800000 6291456 00:03:35.920 buf 0x2000009fffc0 len 4194304 PASSED 00:03:35.920 free 0x2000004fffc0 3145728 00:03:35.920 free 0x2000004ffec0 64 00:03:35.920 unregister 0x200000400000 4194304 PASSED 00:03:35.920 free 0x2000009fffc0 4194304 00:03:35.920 unregister 0x200000800000 6291456 PASSED 00:03:35.920 malloc 8388608 00:03:35.920 register 0x200000400000 10485760 00:03:35.920 buf 0x2000005fffc0 len 8388608 PASSED 00:03:35.920 free 0x2000005fffc0 8388608 00:03:35.920 unregister 0x200000400000 10485760 PASSED 00:03:35.920 passed 00:03:35.920 00:03:35.920 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.920 suites 1 1 n/a 0 0 00:03:35.920 tests 1 1 1 0 0 00:03:35.920 asserts 15 15 15 0 n/a 00:03:35.920 00:03:35.920 Elapsed time = 0.045 seconds 00:03:35.920 00:03:35.920 real 0m0.217s 00:03:35.920 user 0m0.067s 00:03:35.920 sys 0m0.047s 00:03:35.921 ************************************ 00:03:35.921 END TEST env_mem_callbacks 00:03:35.921 ************************************ 00:03:35.921 06:01:55 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:35.921 06:01:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:35.921 ************************************ 00:03:35.921 END TEST env 00:03:35.921 ************************************ 00:03:35.921 00:03:35.921 real 0m6.822s 00:03:35.921 user 0m5.374s 00:03:35.921 sys 0m1.068s 00:03:35.921 06:01:55 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:35.921 06:01:55 env -- 
common/autotest_common.sh@10 -- # set +x 00:03:35.921 06:01:55 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:35.921 06:01:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.921 06:01:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.921 06:01:55 -- common/autotest_common.sh@10 -- # set +x 00:03:35.921 ************************************ 00:03:35.921 START TEST rpc 00:03:35.921 ************************************ 00:03:35.921 06:01:55 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:35.921 * Looking for test storage... 00:03:35.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:35.921 06:01:55 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:35.921 06:01:55 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:35.921 06:01:55 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:35.921 06:01:55 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:35.921 06:01:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.921 06:01:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.921 06:01:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.921 06:01:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.921 06:01:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.921 06:01:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.921 06:01:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.921 06:01:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.921 06:01:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.921 06:01:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.921 06:01:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.921 06:01:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:35.921 06:01:55 rpc -- scripts/common.sh@345 -- # : 1 00:03:35.921 06:01:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.921 06:01:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.921 06:01:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:36.178 06:01:55 rpc -- scripts/common.sh@353 -- # local d=1 00:03:36.178 06:01:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.178 06:01:55 rpc -- scripts/common.sh@355 -- # echo 1 00:03:36.178 06:01:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:36.178 06:01:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:36.178 06:01:55 rpc -- scripts/common.sh@353 -- # local d=2 00:03:36.178 06:01:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.178 06:01:55 rpc -- scripts/common.sh@355 -- # echo 2 00:03:36.178 06:01:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:36.178 06:01:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:36.178 06:01:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:36.178 06:01:55 rpc -- scripts/common.sh@368 -- # return 0 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:36.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.178 --rc genhtml_branch_coverage=1 00:03:36.178 --rc genhtml_function_coverage=1 00:03:36.178 --rc genhtml_legend=1 00:03:36.178 --rc geninfo_all_blocks=1 00:03:36.178 --rc geninfo_unexecuted_blocks=1 00:03:36.178 00:03:36.178 ' 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:36.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.178 --rc genhtml_branch_coverage=1 00:03:36.178 --rc genhtml_function_coverage=1 00:03:36.178 --rc genhtml_legend=1 00:03:36.178 --rc geninfo_all_blocks=1 00:03:36.178 --rc geninfo_unexecuted_blocks=1 00:03:36.178 00:03:36.178 ' 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:36.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.178 --rc genhtml_branch_coverage=1 00:03:36.178 --rc genhtml_function_coverage=1 00:03:36.178 --rc genhtml_legend=1 00:03:36.178 --rc geninfo_all_blocks=1 00:03:36.178 --rc geninfo_unexecuted_blocks=1 00:03:36.178 00:03:36.178 ' 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:36.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.178 --rc genhtml_branch_coverage=1 00:03:36.178 --rc genhtml_function_coverage=1 00:03:36.178 --rc genhtml_legend=1 00:03:36.178 --rc geninfo_all_blocks=1 00:03:36.178 --rc geninfo_unexecuted_blocks=1 00:03:36.178 00:03:36.178 ' 00:03:36.178 06:01:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57182 00:03:36.178 06:01:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.178 06:01:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57182 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@833 -- # '[' -z 57182 ']' 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:36.178 06:01:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:36.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:36.178 06:01:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.178 [2024-11-20 06:01:55.650999] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:03:36.178 [2024-11-20 06:01:55.651244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57182 ] 00:03:36.436 [2024-11-20 06:01:55.811902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.436 [2024-11-20 06:01:55.911989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:36.436 [2024-11-20 06:01:55.912050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57182' to capture a snapshot of events at runtime. 00:03:36.436 [2024-11-20 06:01:55.912061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:36.436 [2024-11-20 06:01:55.912070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:36.436 [2024-11-20 06:01:55.912077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57182 for offline analysis/debug. 00:03:36.436 [2024-11-20 06:01:55.912926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.003 06:01:56 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:37.003 06:01:56 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:37.003 06:01:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:37.003 06:01:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:37.003 06:01:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:37.003 06:01:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:37.003 06:01:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.003 06:01:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.003 06:01:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.003 ************************************ 00:03:37.003 START TEST rpc_integrity 00:03:37.003 ************************************ 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.003 06:01:56 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.003 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:37.003 { 00:03:37.003 "name": "Malloc0", 00:03:37.003 "aliases": [ 00:03:37.003 "100c0677-98fb-41e1-93fe-da6d21cc8d65" 00:03:37.003 ], 00:03:37.003 "product_name": "Malloc disk", 00:03:37.003 "block_size": 512, 00:03:37.003 "num_blocks": 16384, 00:03:37.003 "uuid": "100c0677-98fb-41e1-93fe-da6d21cc8d65", 00:03:37.003 "assigned_rate_limits": { 00:03:37.003 "rw_ios_per_sec": 0, 00:03:37.003 "rw_mbytes_per_sec": 0, 00:03:37.003 "r_mbytes_per_sec": 0, 00:03:37.003 "w_mbytes_per_sec": 0 00:03:37.003 }, 00:03:37.003 "claimed": false, 00:03:37.003 "zoned": false, 00:03:37.003 "supported_io_types": { 00:03:37.003 "read": true, 00:03:37.003 "write": true, 00:03:37.003 "unmap": true, 00:03:37.003 "flush": true, 00:03:37.003 "reset": true, 00:03:37.003 "nvme_admin": false, 00:03:37.003 "nvme_io": false, 00:03:37.003 "nvme_io_md": false, 00:03:37.003 "write_zeroes": true, 00:03:37.003 "zcopy": true, 00:03:37.003 "get_zone_info": false, 00:03:37.003 "zone_management": false, 00:03:37.003 "zone_append": false, 00:03:37.003 "compare": false, 00:03:37.003 "compare_and_write": false, 00:03:37.003 "abort": true, 00:03:37.003 "seek_hole": false, 00:03:37.003 "seek_data": false, 00:03:37.003 "copy": true, 00:03:37.003 "nvme_iov_md": false 00:03:37.003 }, 00:03:37.003 "memory_domains": [ 00:03:37.003 { 00:03:37.003 "dma_device_id": "system", 00:03:37.003 "dma_device_type": 1 00:03:37.003 }, 00:03:37.003 { 00:03:37.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.003 "dma_device_type": 2 00:03:37.003 } 00:03:37.003 ], 00:03:37.003 "driver_specific": {} 00:03:37.003 } 00:03:37.003 ]' 00:03:37.003 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:37.261 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:37.261 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:37.261 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.261 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.261 [2024-11-20 06:01:56.639316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:37.261 [2024-11-20 06:01:56.639381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:37.261 [2024-11-20 06:01:56.639412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:03:37.261 [2024-11-20 06:01:56.639424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:37.261 [2024-11-20 06:01:56.641734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:37.261 [2024-11-20 06:01:56.641777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:37.261 Passthru0 00:03:37.261 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.261 
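rpc_integrity drives everything over JSON-RPC: bdev_malloc_create, then bdev_passthru_create (the Match/claimed notices just above), then bdev_get_bdevs, whose two-bdev JSON dump follows. In-process, the same fields are reachable through the bdev API. A minimal sketch, assuming it runs on an SPDK app thread while the Malloc0/Passthru0 stack exists (bdev names taken from the log):

    #include <stdio.h>
    #include "spdk/bdev.h"

    static void
    bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
    {
        /* Resize/remove notifications; ignored in this sketch. */
    }

    static void
    inspect_passthru(void)
    {
        struct spdk_bdev_desc *desc;

        if (spdk_bdev_open_ext("Passthru0", false, bdev_event_cb, NULL, &desc) != 0) {
            return;
        }

        struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);

        /* Matches the "block_size": 512 and "num_blocks": 16384 fields below. */
        printf("%s: block_size=%u num_blocks=%ju\n",
               spdk_bdev_get_name(bdev),
               spdk_bdev_get_block_size(bdev),
               (uintmax_t)spdk_bdev_get_num_blocks(bdev));

        spdk_bdev_close(desc);
    }
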
06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:37.261 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.261 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.261 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.261 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:37.261 { 00:03:37.261 "name": "Malloc0", 00:03:37.261 "aliases": [ 00:03:37.261 "100c0677-98fb-41e1-93fe-da6d21cc8d65" 00:03:37.261 ], 00:03:37.261 "product_name": "Malloc disk", 00:03:37.261 "block_size": 512, 00:03:37.261 "num_blocks": 16384, 00:03:37.261 "uuid": "100c0677-98fb-41e1-93fe-da6d21cc8d65", 00:03:37.261 "assigned_rate_limits": { 00:03:37.261 "rw_ios_per_sec": 0, 00:03:37.261 "rw_mbytes_per_sec": 0, 00:03:37.261 "r_mbytes_per_sec": 0, 00:03:37.261 "w_mbytes_per_sec": 0 00:03:37.261 }, 00:03:37.261 "claimed": true, 00:03:37.261 "claim_type": "exclusive_write", 00:03:37.261 "zoned": false, 00:03:37.261 "supported_io_types": { 00:03:37.261 "read": true, 00:03:37.261 "write": true, 00:03:37.261 "unmap": true, 00:03:37.261 "flush": true, 00:03:37.261 "reset": true, 00:03:37.261 "nvme_admin": false, 00:03:37.261 "nvme_io": false, 00:03:37.261 "nvme_io_md": false, 00:03:37.261 "write_zeroes": true, 00:03:37.261 "zcopy": true, 00:03:37.261 "get_zone_info": false, 00:03:37.261 "zone_management": false, 00:03:37.261 "zone_append": false, 00:03:37.261 "compare": false, 00:03:37.261 "compare_and_write": false, 00:03:37.261 "abort": true, 00:03:37.261 "seek_hole": false, 00:03:37.261 "seek_data": false, 00:03:37.261 "copy": true, 00:03:37.261 "nvme_iov_md": false 00:03:37.261 }, 00:03:37.261 "memory_domains": [ 00:03:37.261 { 00:03:37.261 "dma_device_id": "system", 00:03:37.262 "dma_device_type": 1 00:03:37.262 }, 00:03:37.262 { 00:03:37.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.262 "dma_device_type": 2 00:03:37.262 } 00:03:37.262 ], 00:03:37.262 "driver_specific": {} 00:03:37.262 }, 00:03:37.262 { 00:03:37.262 "name": "Passthru0", 00:03:37.262 "aliases": [ 00:03:37.262 "2c998032-e2f2-54bd-984e-3cc5424ed025" 00:03:37.262 ], 00:03:37.262 "product_name": "passthru", 00:03:37.262 "block_size": 512, 00:03:37.262 "num_blocks": 16384, 00:03:37.262 "uuid": "2c998032-e2f2-54bd-984e-3cc5424ed025", 00:03:37.262 "assigned_rate_limits": { 00:03:37.262 "rw_ios_per_sec": 0, 00:03:37.262 "rw_mbytes_per_sec": 0, 00:03:37.262 "r_mbytes_per_sec": 0, 00:03:37.262 "w_mbytes_per_sec": 0 00:03:37.262 }, 00:03:37.262 "claimed": false, 00:03:37.262 "zoned": false, 00:03:37.262 "supported_io_types": { 00:03:37.262 "read": true, 00:03:37.262 "write": true, 00:03:37.262 "unmap": true, 00:03:37.262 "flush": true, 00:03:37.262 "reset": true, 00:03:37.262 "nvme_admin": false, 00:03:37.262 "nvme_io": false, 00:03:37.262 "nvme_io_md": false, 00:03:37.262 "write_zeroes": true, 00:03:37.262 "zcopy": true, 00:03:37.262 "get_zone_info": false, 00:03:37.262 "zone_management": false, 00:03:37.262 "zone_append": false, 00:03:37.262 "compare": false, 00:03:37.262 "compare_and_write": false, 00:03:37.262 "abort": true, 00:03:37.262 "seek_hole": false, 00:03:37.262 "seek_data": false, 00:03:37.262 "copy": true, 00:03:37.262 "nvme_iov_md": false 00:03:37.262 }, 00:03:37.262 "memory_domains": [ 00:03:37.262 { 00:03:37.262 "dma_device_id": "system", 00:03:37.262 "dma_device_type": 1 00:03:37.262 }, 00:03:37.262 { 00:03:37.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.262 "dma_device_type": 2 
00:03:37.262 } 00:03:37.262 ], 00:03:37.262 "driver_specific": { 00:03:37.262 "passthru": { 00:03:37.262 "name": "Passthru0", 00:03:37.262 "base_bdev_name": "Malloc0" 00:03:37.262 } 00:03:37.262 } 00:03:37.262 } 00:03:37.262 ]' 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:37.262 ************************************ 00:03:37.262 END TEST rpc_integrity 00:03:37.262 ************************************ 00:03:37.262 06:01:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:37.262 00:03:37.262 real 0m0.226s 00:03:37.262 user 0m0.120s 00:03:37.262 sys 0m0.027s 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:37.262 06:01:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.262 06:01:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.262 06:01:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 ************************************ 00:03:37.262 START TEST rpc_plugins 00:03:37.262 ************************************ 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:37.262 { 00:03:37.262 "name": "Malloc1", 00:03:37.262 "aliases": 
[ 00:03:37.262 "a98141f7-5eef-4dbc-80ae-8feeea73c3bd" 00:03:37.262 ], 00:03:37.262 "product_name": "Malloc disk", 00:03:37.262 "block_size": 4096, 00:03:37.262 "num_blocks": 256, 00:03:37.262 "uuid": "a98141f7-5eef-4dbc-80ae-8feeea73c3bd", 00:03:37.262 "assigned_rate_limits": { 00:03:37.262 "rw_ios_per_sec": 0, 00:03:37.262 "rw_mbytes_per_sec": 0, 00:03:37.262 "r_mbytes_per_sec": 0, 00:03:37.262 "w_mbytes_per_sec": 0 00:03:37.262 }, 00:03:37.262 "claimed": false, 00:03:37.262 "zoned": false, 00:03:37.262 "supported_io_types": { 00:03:37.262 "read": true, 00:03:37.262 "write": true, 00:03:37.262 "unmap": true, 00:03:37.262 "flush": true, 00:03:37.262 "reset": true, 00:03:37.262 "nvme_admin": false, 00:03:37.262 "nvme_io": false, 00:03:37.262 "nvme_io_md": false, 00:03:37.262 "write_zeroes": true, 00:03:37.262 "zcopy": true, 00:03:37.262 "get_zone_info": false, 00:03:37.262 "zone_management": false, 00:03:37.262 "zone_append": false, 00:03:37.262 "compare": false, 00:03:37.262 "compare_and_write": false, 00:03:37.262 "abort": true, 00:03:37.262 "seek_hole": false, 00:03:37.262 "seek_data": false, 00:03:37.262 "copy": true, 00:03:37.262 "nvme_iov_md": false 00:03:37.262 }, 00:03:37.262 "memory_domains": [ 00:03:37.262 { 00:03:37.262 "dma_device_id": "system", 00:03:37.262 "dma_device_type": 1 00:03:37.262 }, 00:03:37.262 { 00:03:37.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.262 "dma_device_type": 2 00:03:37.262 } 00:03:37.262 ], 00:03:37.262 "driver_specific": {} 00:03:37.262 } 00:03:37.262 ]' 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:37.262 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:37.262 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:37.520 ************************************ 00:03:37.520 END TEST rpc_plugins 00:03:37.520 ************************************ 00:03:37.520 06:01:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:37.520 00:03:37.520 real 0m0.118s 00:03:37.520 user 0m0.064s 00:03:37.520 sys 0m0.015s 00:03:37.520 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.520 06:01:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:37.520 06:01:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:37.520 06:01:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.520 06:01:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.520 06:01:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.520 ************************************ 00:03:37.520 START TEST rpc_trace_cmd_test 00:03:37.520 ************************************ 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:37.520 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57182", 00:03:37.520 "tpoint_group_mask": "0x8", 00:03:37.520 "iscsi_conn": { 00:03:37.520 "mask": "0x2", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "scsi": { 00:03:37.520 "mask": "0x4", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "bdev": { 00:03:37.520 "mask": "0x8", 00:03:37.520 "tpoint_mask": "0xffffffffffffffff" 00:03:37.520 }, 00:03:37.520 "nvmf_rdma": { 00:03:37.520 "mask": "0x10", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "nvmf_tcp": { 00:03:37.520 "mask": "0x20", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "ftl": { 00:03:37.520 "mask": "0x40", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "blobfs": { 00:03:37.520 "mask": "0x80", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "dsa": { 00:03:37.520 "mask": "0x200", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "thread": { 00:03:37.520 "mask": "0x400", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "nvme_pcie": { 00:03:37.520 "mask": "0x800", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "iaa": { 00:03:37.520 "mask": "0x1000", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "nvme_tcp": { 00:03:37.520 "mask": "0x2000", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "bdev_nvme": { 00:03:37.520 "mask": "0x4000", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "sock": { 00:03:37.520 "mask": "0x8000", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "blob": { 00:03:37.520 "mask": "0x10000", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "bdev_raid": { 00:03:37.520 "mask": "0x20000", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 }, 00:03:37.520 "scheduler": { 00:03:37.520 "mask": "0x40000", 00:03:37.520 "tpoint_mask": "0x0" 00:03:37.520 } 00:03:37.520 }' 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:37.520 06:01:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:37.520 ************************************ 00:03:37.520 END TEST rpc_trace_cmd_test 00:03:37.520 ************************************ 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:37.520 00:03:37.520 real 0m0.173s 
00:03:37.520 user 0m0.143s 00:03:37.520 sys 0m0.020s 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.520 06:01:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:37.779 06:01:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:37.779 06:01:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:37.779 06:01:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:37.779 06:01:57 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.779 06:01:57 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.779 06:01:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.779 ************************************ 00:03:37.779 START TEST rpc_daemon_integrity 00:03:37.779 ************************************ 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.779 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:37.779 { 00:03:37.779 "name": "Malloc2", 00:03:37.779 "aliases": [ 00:03:37.779 "9b67426b-a292-470d-8926-d358d4427618" 00:03:37.779 ], 00:03:37.779 "product_name": "Malloc disk", 00:03:37.779 "block_size": 512, 00:03:37.779 "num_blocks": 16384, 00:03:37.779 "uuid": "9b67426b-a292-470d-8926-d358d4427618", 00:03:37.779 "assigned_rate_limits": { 00:03:37.779 "rw_ios_per_sec": 0, 00:03:37.779 "rw_mbytes_per_sec": 0, 00:03:37.779 "r_mbytes_per_sec": 0, 00:03:37.779 "w_mbytes_per_sec": 0 00:03:37.779 }, 00:03:37.779 "claimed": false, 00:03:37.779 "zoned": false, 00:03:37.779 "supported_io_types": { 00:03:37.780 "read": true, 00:03:37.780 "write": true, 00:03:37.780 "unmap": true, 00:03:37.780 "flush": true, 00:03:37.780 "reset": true, 00:03:37.780 "nvme_admin": false, 00:03:37.780 "nvme_io": false, 00:03:37.780 "nvme_io_md": false, 00:03:37.780 "write_zeroes": true, 00:03:37.780 "zcopy": true, 00:03:37.780 "get_zone_info": false, 00:03:37.780 "zone_management": false, 00:03:37.780 "zone_append": false, 00:03:37.780 "compare": false, 00:03:37.780 
"compare_and_write": false, 00:03:37.780 "abort": true, 00:03:37.780 "seek_hole": false, 00:03:37.780 "seek_data": false, 00:03:37.780 "copy": true, 00:03:37.780 "nvme_iov_md": false 00:03:37.780 }, 00:03:37.780 "memory_domains": [ 00:03:37.780 { 00:03:37.780 "dma_device_id": "system", 00:03:37.780 "dma_device_type": 1 00:03:37.780 }, 00:03:37.780 { 00:03:37.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.780 "dma_device_type": 2 00:03:37.780 } 00:03:37.780 ], 00:03:37.780 "driver_specific": {} 00:03:37.780 } 00:03:37.780 ]' 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.780 [2024-11-20 06:01:57.267483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:37.780 [2024-11-20 06:01:57.267557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:37.780 [2024-11-20 06:01:57.267579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:03:37.780 [2024-11-20 06:01:57.267590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:37.780 [2024-11-20 06:01:57.269872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:37.780 [2024-11-20 06:01:57.269912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:37.780 Passthru0 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:37.780 { 00:03:37.780 "name": "Malloc2", 00:03:37.780 "aliases": [ 00:03:37.780 "9b67426b-a292-470d-8926-d358d4427618" 00:03:37.780 ], 00:03:37.780 "product_name": "Malloc disk", 00:03:37.780 "block_size": 512, 00:03:37.780 "num_blocks": 16384, 00:03:37.780 "uuid": "9b67426b-a292-470d-8926-d358d4427618", 00:03:37.780 "assigned_rate_limits": { 00:03:37.780 "rw_ios_per_sec": 0, 00:03:37.780 "rw_mbytes_per_sec": 0, 00:03:37.780 "r_mbytes_per_sec": 0, 00:03:37.780 "w_mbytes_per_sec": 0 00:03:37.780 }, 00:03:37.780 "claimed": true, 00:03:37.780 "claim_type": "exclusive_write", 00:03:37.780 "zoned": false, 00:03:37.780 "supported_io_types": { 00:03:37.780 "read": true, 00:03:37.780 "write": true, 00:03:37.780 "unmap": true, 00:03:37.780 "flush": true, 00:03:37.780 "reset": true, 00:03:37.780 "nvme_admin": false, 00:03:37.780 "nvme_io": false, 00:03:37.780 "nvme_io_md": false, 00:03:37.780 "write_zeroes": true, 00:03:37.780 "zcopy": true, 00:03:37.780 "get_zone_info": false, 00:03:37.780 "zone_management": false, 00:03:37.780 "zone_append": false, 00:03:37.780 "compare": false, 00:03:37.780 "compare_and_write": false, 00:03:37.780 "abort": true, 00:03:37.780 "seek_hole": false, 00:03:37.780 "seek_data": false, 
00:03:37.780 "copy": true, 00:03:37.780 "nvme_iov_md": false 00:03:37.780 }, 00:03:37.780 "memory_domains": [ 00:03:37.780 { 00:03:37.780 "dma_device_id": "system", 00:03:37.780 "dma_device_type": 1 00:03:37.780 }, 00:03:37.780 { 00:03:37.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.780 "dma_device_type": 2 00:03:37.780 } 00:03:37.780 ], 00:03:37.780 "driver_specific": {} 00:03:37.780 }, 00:03:37.780 { 00:03:37.780 "name": "Passthru0", 00:03:37.780 "aliases": [ 00:03:37.780 "bce7ec93-ea68-52ad-8008-23181294067f" 00:03:37.780 ], 00:03:37.780 "product_name": "passthru", 00:03:37.780 "block_size": 512, 00:03:37.780 "num_blocks": 16384, 00:03:37.780 "uuid": "bce7ec93-ea68-52ad-8008-23181294067f", 00:03:37.780 "assigned_rate_limits": { 00:03:37.780 "rw_ios_per_sec": 0, 00:03:37.780 "rw_mbytes_per_sec": 0, 00:03:37.780 "r_mbytes_per_sec": 0, 00:03:37.780 "w_mbytes_per_sec": 0 00:03:37.780 }, 00:03:37.780 "claimed": false, 00:03:37.780 "zoned": false, 00:03:37.780 "supported_io_types": { 00:03:37.780 "read": true, 00:03:37.780 "write": true, 00:03:37.780 "unmap": true, 00:03:37.780 "flush": true, 00:03:37.780 "reset": true, 00:03:37.780 "nvme_admin": false, 00:03:37.780 "nvme_io": false, 00:03:37.780 "nvme_io_md": false, 00:03:37.780 "write_zeroes": true, 00:03:37.780 "zcopy": true, 00:03:37.780 "get_zone_info": false, 00:03:37.780 "zone_management": false, 00:03:37.780 "zone_append": false, 00:03:37.780 "compare": false, 00:03:37.780 "compare_and_write": false, 00:03:37.780 "abort": true, 00:03:37.780 "seek_hole": false, 00:03:37.780 "seek_data": false, 00:03:37.780 "copy": true, 00:03:37.780 "nvme_iov_md": false 00:03:37.780 }, 00:03:37.780 "memory_domains": [ 00:03:37.780 { 00:03:37.780 "dma_device_id": "system", 00:03:37.780 "dma_device_type": 1 00:03:37.780 }, 00:03:37.780 { 00:03:37.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:37.780 "dma_device_type": 2 00:03:37.780 } 00:03:37.780 ], 00:03:37.780 "driver_specific": { 00:03:37.780 "passthru": { 00:03:37.780 "name": "Passthru0", 00:03:37.780 "base_bdev_name": "Malloc2" 00:03:37.780 } 00:03:37.780 } 00:03:37.780 } 00:03:37.780 ]' 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.780 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.781 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:37.781 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.781 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:37.781 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.781 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:03:37.781 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:38.038 ************************************ 00:03:38.038 END TEST rpc_daemon_integrity 00:03:38.038 ************************************ 00:03:38.038 06:01:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:38.038 00:03:38.038 real 0m0.252s 00:03:38.038 user 0m0.135s 00:03:38.038 sys 0m0.026s 00:03:38.038 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:38.038 06:01:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.038 06:01:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:38.038 06:01:57 rpc -- rpc/rpc.sh@84 -- # killprocess 57182 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@952 -- # '[' -z 57182 ']' 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@956 -- # kill -0 57182 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@957 -- # uname 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57182 00:03:38.038 killing process with pid 57182 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57182' 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@971 -- # kill 57182 00:03:38.038 06:01:57 rpc -- common/autotest_common.sh@976 -- # wait 57182 00:03:39.408 00:03:39.408 real 0m3.560s 00:03:39.408 user 0m4.010s 00:03:39.408 sys 0m0.577s 00:03:39.408 06:01:58 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:39.408 06:01:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.408 ************************************ 00:03:39.408 END TEST rpc 00:03:39.408 ************************************ 00:03:39.408 06:01:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:39.408 06:01:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:39.408 06:01:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:39.408 06:01:59 -- common/autotest_common.sh@10 -- # set +x 00:03:39.408 ************************************ 00:03:39.408 START TEST skip_rpc 00:03:39.408 ************************************ 00:03:39.408 06:01:59 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:39.665 * Looking for test storage... 
00:03:39.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:03:39.665 06:01:59 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:39.665 06:01:59 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:03:39.665 06:01:59 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:39.665 06:01:59 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@345 -- # : 1
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:39.665 06:01:59 skip_rpc -- scripts/common.sh@368 -- # return 0
00:03:39.665 06:01:59 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:39.665 06:01:59 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:39.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.665 --rc genhtml_branch_coverage=1
00:03:39.665 --rc genhtml_function_coverage=1
00:03:39.666 --rc genhtml_legend=1
00:03:39.666 --rc geninfo_all_blocks=1
00:03:39.666 --rc geninfo_unexecuted_blocks=1
00:03:39.666
00:03:39.666 '
00:03:39.666 06:01:59 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:39.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.666 --rc genhtml_branch_coverage=1
00:03:39.666 --rc genhtml_function_coverage=1
00:03:39.666 --rc genhtml_legend=1
00:03:39.666 --rc geninfo_all_blocks=1
00:03:39.666 --rc geninfo_unexecuted_blocks=1
00:03:39.666
00:03:39.666 '
00:03:39.666 06:01:59 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:39.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.666 --rc genhtml_branch_coverage=1
00:03:39.666 --rc genhtml_function_coverage=1
00:03:39.666 --rc genhtml_legend=1
00:03:39.666 --rc geninfo_all_blocks=1
00:03:39.666 --rc geninfo_unexecuted_blocks=1
00:03:39.666
00:03:39.666 '
00:03:39.666 06:01:59 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:39.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.666 --rc genhtml_branch_coverage=1
00:03:39.666 --rc genhtml_function_coverage=1
00:03:39.666 --rc genhtml_legend=1
00:03:39.666 --rc geninfo_all_blocks=1
00:03:39.666 --rc geninfo_unexecuted_blocks=1
00:03:39.666
00:03:39.666 '
00:03:39.666 06:01:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:03:39.666 06:01:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:03:39.666 06:01:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:03:39.666 06:01:59 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:39.666 06:01:59 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:39.666 06:01:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:39.666 ************************************
00:03:39.666 START TEST skip_rpc
00:03:39.666 ************************************
00:03:39.666 06:01:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc
00:03:39.666 06:01:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57394
00:03:39.666 06:01:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:39.666 06:01:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:03:39.666 06:01:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:03:39.666 [2024-11-20 06:01:59.209075] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:03:39.666 [2024-11-20 06:01:59.209345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57394 ]
00:03:39.923 [2024-11-20 06:01:59.366025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:39.923 [2024-11-20 06:01:59.473367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57394
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57394 ']'
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57394
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57394
00:03:45.190 killing process with pid 57394
06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57394'
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57394
00:03:45.190 06:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57394
00:03:46.122 ************************************
00:03:46.122 END TEST skip_rpc
00:03:46.122 ************************************
00:03:46.122
00:03:46.122 real 0m6.428s
00:03:46.122 user 0m6.022s
00:03:46.122 sys 0m0.295s
00:03:46.122 06:02:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:46.122 06:02:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:46.122 06:02:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:03:46.122 06:02:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:46.122 06:02:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:46.122 06:02:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:46.122 ************************************
00:03:46.122 START TEST skip_rpc_with_json
00:03:46.122 ************************************
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57493
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57493
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57493 ']'
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:46.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:02:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable
00:03:46.122 06:02:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:46.122 [2024-11-20 06:02:05.715582] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:03:46.122 [2024-11-20 06:02:05.715705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57493 ]
00:03:46.380 [2024-11-20 06:02:05.873318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:46.380 [2024-11-20 06:02:05.958054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:46.945 [2024-11-20 06:02:06.516232] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:03:46.945 request:
00:03:46.945 {
00:03:46.945 "trtype": "tcp",
00:03:46.945 "method": "nvmf_get_transports",
00:03:46.945 "req_id": 1
00:03:46.945 }
00:03:46.945 Got JSON-RPC error response
00:03:46.945 response:
00:03:46.945 {
00:03:46.945 "code": -19,
00:03:46.945 "message": "No such device"
00:03:46.945 }
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:46.945 [2024-11-20 06:02:06.524318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:46.945 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:47.203 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:47.203 06:02:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:03:47.203 {
00:03:47.203 "subsystems": [
00:03:47.203 {
00:03:47.203 "subsystem": "fsdev",
00:03:47.203 "config": [
00:03:47.203 {
00:03:47.203 "method": "fsdev_set_opts",
00:03:47.203 "params": {
00:03:47.203 "fsdev_io_pool_size": 65535,
00:03:47.203 "fsdev_io_cache_size": 256
00:03:47.203 }
00:03:47.203 }
00:03:47.203 ]
00:03:47.203 },
00:03:47.203 {
00:03:47.203 "subsystem": "keyring",
00:03:47.203 "config": []
00:03:47.203 },
00:03:47.203 {
00:03:47.203 "subsystem": "iobuf",
00:03:47.203 "config": [
00:03:47.203 {
00:03:47.203 "method": "iobuf_set_options",
00:03:47.203 "params": {
00:03:47.203 "small_pool_count": 8192,
00:03:47.203 "large_pool_count": 1024,
00:03:47.203 "small_bufsize": 8192,
00:03:47.203 "large_bufsize": 135168,
00:03:47.203 "enable_numa": false
00:03:47.203 }
00:03:47.203 }
00:03:47.203 ]
00:03:47.203 },
00:03:47.203 {
00:03:47.203 "subsystem": "sock",
00:03:47.203 "config": [
00:03:47.203 {
00:03:47.203 "method": "sock_set_default_impl", 00:03:47.203 "params": { 00:03:47.203 "impl_name": "posix" 00:03:47.203 } 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "method": "sock_impl_set_options", 00:03:47.203 "params": { 00:03:47.203 "impl_name": "ssl", 00:03:47.203 "recv_buf_size": 4096, 00:03:47.203 "send_buf_size": 4096, 00:03:47.203 "enable_recv_pipe": true, 00:03:47.203 "enable_quickack": false, 00:03:47.203 "enable_placement_id": 0, 00:03:47.203 "enable_zerocopy_send_server": true, 00:03:47.203 "enable_zerocopy_send_client": false, 00:03:47.203 "zerocopy_threshold": 0, 00:03:47.203 "tls_version": 0, 00:03:47.203 "enable_ktls": false 00:03:47.203 } 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "method": "sock_impl_set_options", 00:03:47.203 "params": { 00:03:47.203 "impl_name": "posix", 00:03:47.203 "recv_buf_size": 2097152, 00:03:47.203 "send_buf_size": 2097152, 00:03:47.203 "enable_recv_pipe": true, 00:03:47.203 "enable_quickack": false, 00:03:47.203 "enable_placement_id": 0, 00:03:47.203 "enable_zerocopy_send_server": true, 00:03:47.203 "enable_zerocopy_send_client": false, 00:03:47.203 "zerocopy_threshold": 0, 00:03:47.203 "tls_version": 0, 00:03:47.203 "enable_ktls": false 00:03:47.203 } 00:03:47.203 } 00:03:47.203 ] 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "subsystem": "vmd", 00:03:47.203 "config": [] 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "subsystem": "accel", 00:03:47.203 "config": [ 00:03:47.203 { 00:03:47.203 "method": "accel_set_options", 00:03:47.203 "params": { 00:03:47.203 "small_cache_size": 128, 00:03:47.203 "large_cache_size": 16, 00:03:47.203 "task_count": 2048, 00:03:47.203 "sequence_count": 2048, 00:03:47.203 "buf_count": 2048 00:03:47.203 } 00:03:47.203 } 00:03:47.203 ] 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "subsystem": "bdev", 00:03:47.203 "config": [ 00:03:47.203 { 00:03:47.203 "method": "bdev_set_options", 00:03:47.203 "params": { 00:03:47.203 "bdev_io_pool_size": 65535, 00:03:47.203 "bdev_io_cache_size": 256, 00:03:47.203 "bdev_auto_examine": true, 00:03:47.203 "iobuf_small_cache_size": 128, 00:03:47.203 "iobuf_large_cache_size": 16 00:03:47.203 } 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "method": "bdev_raid_set_options", 00:03:47.203 "params": { 00:03:47.203 "process_window_size_kb": 1024, 00:03:47.203 "process_max_bandwidth_mb_sec": 0 00:03:47.203 } 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "method": "bdev_iscsi_set_options", 00:03:47.203 "params": { 00:03:47.203 "timeout_sec": 30 00:03:47.203 } 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "method": "bdev_nvme_set_options", 00:03:47.203 "params": { 00:03:47.203 "action_on_timeout": "none", 00:03:47.203 "timeout_us": 0, 00:03:47.203 "timeout_admin_us": 0, 00:03:47.203 "keep_alive_timeout_ms": 10000, 00:03:47.203 "arbitration_burst": 0, 00:03:47.203 "low_priority_weight": 0, 00:03:47.203 "medium_priority_weight": 0, 00:03:47.203 "high_priority_weight": 0, 00:03:47.203 "nvme_adminq_poll_period_us": 10000, 00:03:47.203 "nvme_ioq_poll_period_us": 0, 00:03:47.203 "io_queue_requests": 0, 00:03:47.203 "delay_cmd_submit": true, 00:03:47.203 "transport_retry_count": 4, 00:03:47.203 "bdev_retry_count": 3, 00:03:47.203 "transport_ack_timeout": 0, 00:03:47.203 "ctrlr_loss_timeout_sec": 0, 00:03:47.203 "reconnect_delay_sec": 0, 00:03:47.203 "fast_io_fail_timeout_sec": 0, 00:03:47.203 "disable_auto_failback": false, 00:03:47.203 "generate_uuids": false, 00:03:47.203 "transport_tos": 0, 00:03:47.203 "nvme_error_stat": false, 00:03:47.203 "rdma_srq_size": 0, 00:03:47.203 "io_path_stat": false, 
00:03:47.203 "allow_accel_sequence": false, 00:03:47.203 "rdma_max_cq_size": 0, 00:03:47.203 "rdma_cm_event_timeout_ms": 0, 00:03:47.203 "dhchap_digests": [ 00:03:47.203 "sha256", 00:03:47.203 "sha384", 00:03:47.203 "sha512" 00:03:47.203 ], 00:03:47.203 "dhchap_dhgroups": [ 00:03:47.203 "null", 00:03:47.203 "ffdhe2048", 00:03:47.203 "ffdhe3072", 00:03:47.203 "ffdhe4096", 00:03:47.203 "ffdhe6144", 00:03:47.203 "ffdhe8192" 00:03:47.203 ] 00:03:47.203 } 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "method": "bdev_nvme_set_hotplug", 00:03:47.203 "params": { 00:03:47.203 "period_us": 100000, 00:03:47.203 "enable": false 00:03:47.203 } 00:03:47.203 }, 00:03:47.203 { 00:03:47.203 "method": "bdev_wait_for_examine" 00:03:47.203 } 00:03:47.203 ] 00:03:47.203 }, 00:03:47.203 { 00:03:47.204 "subsystem": "scsi", 00:03:47.204 "config": null 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "subsystem": "scheduler", 00:03:47.204 "config": [ 00:03:47.204 { 00:03:47.204 "method": "framework_set_scheduler", 00:03:47.204 "params": { 00:03:47.204 "name": "static" 00:03:47.204 } 00:03:47.204 } 00:03:47.204 ] 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "subsystem": "vhost_scsi", 00:03:47.204 "config": [] 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "subsystem": "vhost_blk", 00:03:47.204 "config": [] 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "subsystem": "ublk", 00:03:47.204 "config": [] 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "subsystem": "nbd", 00:03:47.204 "config": [] 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "subsystem": "nvmf", 00:03:47.204 "config": [ 00:03:47.204 { 00:03:47.204 "method": "nvmf_set_config", 00:03:47.204 "params": { 00:03:47.204 "discovery_filter": "match_any", 00:03:47.204 "admin_cmd_passthru": { 00:03:47.204 "identify_ctrlr": false 00:03:47.204 }, 00:03:47.204 "dhchap_digests": [ 00:03:47.204 "sha256", 00:03:47.204 "sha384", 00:03:47.204 "sha512" 00:03:47.204 ], 00:03:47.204 "dhchap_dhgroups": [ 00:03:47.204 "null", 00:03:47.204 "ffdhe2048", 00:03:47.204 "ffdhe3072", 00:03:47.204 "ffdhe4096", 00:03:47.204 "ffdhe6144", 00:03:47.204 "ffdhe8192" 00:03:47.204 ] 00:03:47.204 } 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "method": "nvmf_set_max_subsystems", 00:03:47.204 "params": { 00:03:47.204 "max_subsystems": 1024 00:03:47.204 } 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "method": "nvmf_set_crdt", 00:03:47.204 "params": { 00:03:47.204 "crdt1": 0, 00:03:47.204 "crdt2": 0, 00:03:47.204 "crdt3": 0 00:03:47.204 } 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "method": "nvmf_create_transport", 00:03:47.204 "params": { 00:03:47.204 "trtype": "TCP", 00:03:47.204 "max_queue_depth": 128, 00:03:47.204 "max_io_qpairs_per_ctrlr": 127, 00:03:47.204 "in_capsule_data_size": 4096, 00:03:47.204 "max_io_size": 131072, 00:03:47.204 "io_unit_size": 131072, 00:03:47.204 "max_aq_depth": 128, 00:03:47.204 "num_shared_buffers": 511, 00:03:47.204 "buf_cache_size": 4294967295, 00:03:47.204 "dif_insert_or_strip": false, 00:03:47.204 "zcopy": false, 00:03:47.204 "c2h_success": true, 00:03:47.204 "sock_priority": 0, 00:03:47.204 "abort_timeout_sec": 1, 00:03:47.204 "ack_timeout": 0, 00:03:47.204 "data_wr_pool_size": 0 00:03:47.204 } 00:03:47.204 } 00:03:47.204 ] 00:03:47.204 }, 00:03:47.204 { 00:03:47.204 "subsystem": "iscsi", 00:03:47.204 "config": [ 00:03:47.204 { 00:03:47.204 "method": "iscsi_set_options", 00:03:47.204 "params": { 00:03:47.204 "node_base": "iqn.2016-06.io.spdk", 00:03:47.204 "max_sessions": 128, 00:03:47.204 "max_connections_per_session": 2, 00:03:47.204 "max_queue_depth": 64, 00:03:47.204 
"default_time2wait": 2, 00:03:47.204 "default_time2retain": 20, 00:03:47.204 "first_burst_length": 8192, 00:03:47.204 "immediate_data": true, 00:03:47.204 "allow_duplicated_isid": false, 00:03:47.204 "error_recovery_level": 0, 00:03:47.204 "nop_timeout": 60, 00:03:47.204 "nop_in_interval": 30, 00:03:47.204 "disable_chap": false, 00:03:47.204 "require_chap": false, 00:03:47.204 "mutual_chap": false, 00:03:47.204 "chap_group": 0, 00:03:47.204 "max_large_datain_per_connection": 64, 00:03:47.204 "max_r2t_per_connection": 4, 00:03:47.204 "pdu_pool_size": 36864, 00:03:47.204 "immediate_data_pool_size": 16384, 00:03:47.204 "data_out_pool_size": 2048 00:03:47.204 } 00:03:47.204 } 00:03:47.204 ] 00:03:47.204 } 00:03:47.204 ] 00:03:47.204 } 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57493 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57493 ']' 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57493 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57493 00:03:47.204 killing process with pid 57493 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57493' 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57493 00:03:47.204 06:02:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57493 00:03:48.574 06:02:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57527 00:03:48.574 06:02:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:48.574 06:02:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57527 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57527 ']' 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57527 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57527 00:03:53.832 killing process with pid 57527 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57527' 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57527 00:03:53.832 06:02:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57527 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:54.765 ************************************ 00:03:54.765 END TEST skip_rpc_with_json 00:03:54.765 ************************************ 00:03:54.765 00:03:54.765 real 0m8.540s 00:03:54.765 user 0m8.149s 00:03:54.765 sys 0m0.608s 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.765 06:02:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:54.765 06:02:14 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.765 06:02:14 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.765 06:02:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.765 ************************************ 00:03:54.765 START TEST skip_rpc_with_delay 00:03:54.765 ************************************ 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:54.765 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.766 [2024-11-20 06:02:14.291589] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:03:54.766
00:03:54.766 real 0m0.160s
00:03:54.766 user 0m0.084s
00:03:54.766 sys 0m0.070s
00:03:54.766 ************************************
00:03:54.766 END TEST skip_rpc_with_delay
00:03:54.766 ************************************
00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:54.766 06:02:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:03:54.766 06:02:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:03:54.766 06:02:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:03:54.766 06:02:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:03:54.766 06:02:14 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:54.766 06:02:14 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:54.766 06:02:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.766 ************************************
00:03:54.766 START TEST exit_on_failed_rpc_init
00:03:54.766 ************************************
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57649
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57649
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57649 ']'
00:03:54.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:02:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable
00:03:54.766 06:02:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:03:55.023 [2024-11-20 06:02:14.464150] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:03:55.023 [2024-11-20 06:02:14.464742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57649 ]
00:03:55.024 [2024-11-20 06:02:14.614087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:55.282 [2024-11-20 06:02:14.701437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:55.845 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:03:55.845 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0
00:03:55.845 06:02:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:55.845 06:02:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:03:55.845 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:03:55.846 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:03:55.846 [2024-11-20 06:02:15.352201] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:03:55.846 [2024-11-20 06:02:15.352499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57667 ]
00:03:56.103 [2024-11-20 06:02:15.511900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:56.103 [2024-11-20 06:02:15.613479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:03:56.103 [2024-11-20 06:02:15.613564] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:03:56.103 [2024-11-20 06:02:15.613577] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:03:56.103 [2024-11-20 06:02:15.613590] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:03:56.360 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:03:56.360 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:03:56.360 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57649
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57649 ']'
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57649
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57649
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:03:56.361 killing process with pid 57649
06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57649'
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57649
00:03:56.361 06:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57649
00:03:57.732
00:03:57.732 real 0m2.656s
00:03:57.732 user 0m2.927s
00:03:57.732 sys 0m0.423s
00:03:57.732 ************************************
00:03:57.732 END TEST exit_on_failed_rpc_init
00:03:57.732 ************************************
00:03:57.732 06:02:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:57.732 06:02:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:03:57.732 06:02:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:03:57.732
00:03:57.732 real 0m18.064s
00:03:57.732 user 0m17.303s
00:03:57.732 sys 0m1.554s
00:03:57.732 06:02:17 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:57.732 ************************************
00:03:57.732 END TEST skip_rpc
00:03:57.732 ************************************
00:03:57.732 06:02:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:57.732 06:02:17 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:03:57.732 06:02:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:57.732 06:02:17 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:57.732 06:02:17 -- common/autotest_common.sh@10 -- # set +x
00:03:57.732 ************************************
00:03:57.732 START TEST rpc_client
00:03:57.732 ************************************
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:03:57.732 * Looking for test storage...
00:03:57.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@345 -- # : 1
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@353 -- # local d=1
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@355 -- # echo 1
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@353 -- # local d=2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@355 -- # echo 2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:57.732 06:02:17 rpc_client -- scripts/common.sh@368 -- # return 0
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:57.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.732 --rc genhtml_branch_coverage=1
00:03:57.732 --rc genhtml_function_coverage=1
00:03:57.732 --rc genhtml_legend=1
00:03:57.732 --rc geninfo_all_blocks=1
00:03:57.732 --rc geninfo_unexecuted_blocks=1
00:03:57.732
00:03:57.732 '
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:57.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.732 --rc genhtml_branch_coverage=1
00:03:57.732 --rc genhtml_function_coverage=1
00:03:57.732 --rc genhtml_legend=1
00:03:57.732 --rc geninfo_all_blocks=1
00:03:57.732 --rc geninfo_unexecuted_blocks=1
00:03:57.732
00:03:57.732 '
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:57.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.732 --rc genhtml_branch_coverage=1
00:03:57.732 --rc genhtml_function_coverage=1
00:03:57.732 --rc genhtml_legend=1
00:03:57.732 --rc geninfo_all_blocks=1
00:03:57.732 --rc geninfo_unexecuted_blocks=1
00:03:57.732
00:03:57.732 '
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:57.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.732 --rc genhtml_branch_coverage=1
00:03:57.732 --rc genhtml_function_coverage=1
00:03:57.732 --rc genhtml_legend=1
00:03:57.732 --rc geninfo_all_blocks=1
00:03:57.732 --rc geninfo_unexecuted_blocks=1
00:03:57.732
00:03:57.732 '
00:03:57.732 06:02:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:03:57.732 OK
00:03:57.732 06:02:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:03:57.732
00:03:57.732 real 0m0.185s
00:03:57.732 user 0m0.106s
00:03:57.732 sys 0m0.088s
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:57.732 06:02:17 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:03:57.732 ************************************
00:03:57.732 END TEST rpc_client
00:03:57.732 ************************************
00:03:57.732 06:02:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:03:57.732 06:02:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:57.732 06:02:17 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:57.732 06:02:17 -- common/autotest_common.sh@10 -- # set +x
00:03:57.732 ************************************
00:03:57.732 START TEST json_config
00:03:57.732 ************************************
00:03:57.990 06:02:17 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:03:57.990 06:02:17 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:57.990 06:02:17 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:03:57.990 06:02:17 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:57.990 06:02:17 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:57.990 06:02:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:57.990 06:02:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:57.990 06:02:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:57.990 06:02:17 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:03:57.990 06:02:17 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:03:57.990 06:02:17 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:03:57.990 06:02:17 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:03:57.990 06:02:17 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:03:57.990 06:02:17 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:03:57.990 06:02:17 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:03:57.990 06:02:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:57.990 06:02:17 json_config -- scripts/common.sh@344 -- # case "$op" in
00:03:57.990 06:02:17 json_config -- scripts/common.sh@345 -- # : 1
00:03:57.990 06:02:17 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:57.990 06:02:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:57.990 06:02:17 json_config -- scripts/common.sh@365 -- # decimal 1
00:03:57.990 06:02:17 json_config -- scripts/common.sh@353 -- # local d=1
00:03:57.990 06:02:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:57.990 06:02:17 json_config -- scripts/common.sh@355 -- # echo 1
00:03:57.991 06:02:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:03:57.991 06:02:17 json_config -- scripts/common.sh@366 -- # decimal 2
00:03:57.991 06:02:17 json_config -- scripts/common.sh@353 -- # local d=2
00:03:57.991 06:02:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:57.991 06:02:17 json_config -- scripts/common.sh@355 -- # echo 2
00:03:57.991 06:02:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:03:57.991 06:02:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:57.991 06:02:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:57.991 06:02:17 json_config -- scripts/common.sh@368 -- # return 0
00:03:57.991 06:02:17 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:57.991 06:02:17 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:57.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.991 --rc genhtml_branch_coverage=1
00:03:57.991 --rc genhtml_function_coverage=1
00:03:57.991 --rc genhtml_legend=1
00:03:57.991 --rc geninfo_all_blocks=1
00:03:57.991 --rc geninfo_unexecuted_blocks=1
00:03:57.991
00:03:57.991 '
00:03:57.991 06:02:17 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:57.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.991 --rc genhtml_branch_coverage=1
00:03:57.991 --rc genhtml_function_coverage=1
00:03:57.991 --rc genhtml_legend=1
00:03:57.991 --rc geninfo_all_blocks=1
00:03:57.991 --rc geninfo_unexecuted_blocks=1
00:03:57.991
00:03:57.991 '
00:03:57.991 06:02:17 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:57.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.991 --rc genhtml_branch_coverage=1
00:03:57.991 --rc genhtml_function_coverage=1
00:03:57.991 --rc genhtml_legend=1
00:03:57.991 --rc geninfo_all_blocks=1
00:03:57.991 --rc geninfo_unexecuted_blocks=1
00:03:57.991
00:03:57.991 '
00:03:57.991 06:02:17 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:57.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.991 --rc genhtml_branch_coverage=1
00:03:57.991 --rc genhtml_function_coverage=1
00:03:57.991 --rc genhtml_legend=1
00:03:57.991 --rc geninfo_all_blocks=1
00:03:57.991 --rc geninfo_unexecuted_blocks=1
00:03:57.991
00:03:57.991 '
00:03:57.991 06:02:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@7 -- # uname -s
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6828c9e-a976-459e-9e48-80a08ea9ebe5
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b6828c9e-a976-459e-9e48-80a08ea9ebe5
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:57.991 06:02:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:03:57.991 06:02:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:57.991 06:02:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:57.991 06:02:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:57.991 06:02:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:57.991 06:02:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:57.991 06:02:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:57.991 06:02:17 json_config -- paths/export.sh@5 -- # export PATH
00:03:57.991 06:02:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@51 -- # : 0
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:57.991 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:57.991 06:02:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:57.991 06:02:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:03:57.991 06:02:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:03:57.991 06:02:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:03:57.991 06:02:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:03:57.991 06:02:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:03:57.991 06:02:17 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:03:57.991 WARNING: No tests are enabled so not running JSON configuration tests
06:02:17 json_config -- json_config/json_config.sh@28 -- # exit 0
00:03:57.991
00:03:57.991 real 0m0.139s
00:03:57.991 user 0m0.095s
00:03:57.991 sys 0m0.043s
00:03:57.991 06:02:17 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:57.991 06:02:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:57.991 ************************************
00:03:57.991 END TEST json_config
00:03:57.991 ************************************
00:03:57.991 06:02:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:03:57.991 06:02:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:57.991 06:02:17 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:57.991 06:02:17 -- common/autotest_common.sh@10 -- # set +x
00:03:57.991 ************************************
00:03:57.991 START TEST json_config_extra_key
00:03:57.991 ************************************
00:03:57.991 06:02:17 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:03:57.991 06:02:17 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:57.991 06:02:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version
00:03:57.991 06:02:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:57.991 06:02:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:03:58.249 06:02:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:58.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.250 --rc genhtml_branch_coverage=1
00:03:58.250 --rc genhtml_function_coverage=1
00:03:58.250 --rc genhtml_legend=1
00:03:58.250 --rc geninfo_all_blocks=1
00:03:58.250 --rc geninfo_unexecuted_blocks=1
00:03:58.250
00:03:58.250 '
00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:58.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.250 --rc genhtml_branch_coverage=1
00:03:58.250 --rc genhtml_function_coverage=1
00:03:58.250 --rc genhtml_legend=1
00:03:58.250 --rc geninfo_all_blocks=1
00:03:58.250 --rc geninfo_unexecuted_blocks=1
00:03:58.250
00:03:58.250 '
00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:58.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.250 --rc genhtml_branch_coverage=1
00:03:58.250 --rc genhtml_function_coverage=1
00:03:58.250 --rc genhtml_legend=1
00:03:58.250 --rc geninfo_all_blocks=1
00:03:58.250 --rc geninfo_unexecuted_blocks=1
00:03:58.250
00:03:58.250 '
00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6828c9e-a976-459e-9e48-80a08ea9ebe5
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b6828c9e-a976-459e-9e48-80a08ea9ebe5
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:58.250 06:02:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:58.250 06:02:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:58.250 06:02:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:58.250 06:02:17 json_config_extra_key -- paths/export.sh@4
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.250 06:02:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:58.250 06:02:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:58.250 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:58.250 06:02:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:58.250 INFO: launching applications... 
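The '[: : integer expression expected' complaint just above (and in the earlier json_config run) comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an optional flag expands to the empty string, and the numeric -eq test requires integer operands. A minimal defensive sketch, assuming the flag is an ordinary environment variable; the variable and argument names below are hypothetical, not the actual nvmf/common.sh code:

# Sketch: default the optional flag to 0 so an empty value can never
# reach the arithmetic test that printed the error above.
: "${SPDK_TEST_NVME_INTERRUPT:=0}"            # hypothetical flag name
if [ "$SPDK_TEST_NVME_INTERRUPT" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)              # hypothetical extra argument
fi

With the default in place the guard degrades to '[' 0 -eq 1 ']', which is merely false, so the script carries on exactly as the log shows it doing after the warning.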
00:03:58.250 06:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57861 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.250 Waiting for target to run... 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57861 /var/tmp/spdk_tgt.sock 00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57861 ']' 00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:58.250 06:02:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:58.250 06:02:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.250 [2024-11-20 06:02:17.740942] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:03:58.250 [2024-11-20 06:02:17.741230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57861 ] 00:03:58.509 [2024-11-20 06:02:18.067287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.766 [2024-11-20 06:02:18.159819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.331 00:03:59.331 INFO: shutting down applications... 00:03:59.331 06:02:18 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:59.331 06:02:18 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:03:59.331 06:02:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:59.332 06:02:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:03:59.332 06:02:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57861 ]] 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57861 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:03:59.332 06:02:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:59.589 06:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:59.589 06:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.589 06:02:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:03:59.589 06:02:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.153 06:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.153 06:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.153 06:02:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:00.153 06:02:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.717 06:02:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.717 06:02:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.717 06:02:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:00.717 06:02:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.282 06:02:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.282 06:02:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.282 06:02:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:01.282 SPDK target shutdown done 00:04:01.282 Success 00:04:01.282 06:02:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:01.282 06:02:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:01.282 06:02:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:01.282 06:02:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:01.282 06:02:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:01.282 00:04:01.282 real 0m3.159s 00:04:01.282 user 0m2.761s 00:04:01.282 sys 0m0.412s 00:04:01.282 ************************************ 00:04:01.282 END TEST json_config_extra_key 00:04:01.282 ************************************ 00:04:01.282 06:02:20 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.282 06:02:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:01.282 06:02:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:01.282 06:02:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:01.282 06:02:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:01.282 06:02:20 -- common/autotest_common.sh@10 -- # set +x 00:04:01.282 
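The json_config_extra_key teardown traced above is json_config/common.sh's polling shutdown: send SIGINT, then probe the pid with kill -0 every 0.5 s for up to 30 rounds. A condensed sketch of that loop, simplified from the trace (the app_pid bookkeeping and error trap are omitted):

# SIGINT first, then wait up to 15 s (30 x 0.5 s) for the target to exit.
shutdown_app_sketch() {
  local pid=$1 i
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0    # pid gone: clean shutdown
    sleep 0.5
  done
  return 1                                    # still alive after 30 probes
}

In the run above four sleep rounds elapsed before kill -0 57861 finally failed, at which point the script cleared app_pid, hit break, and printed 'SPDK target shutdown done'.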
************************************ 00:04:01.282 START TEST alias_rpc 00:04:01.282 ************************************ 00:04:01.282 06:02:20 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:01.282 * Looking for test storage... 00:04:01.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:01.282 06:02:20 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:01.282 06:02:20 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:01.282 06:02:20 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:01.282 06:02:20 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.282 06:02:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
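The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message above is printed by waitforlisten, which blocks until the freshly launched spdk_tgt answers on its RPC socket. A simplified sketch of that idea; the real helper in autotest_common.sh does more (it also checks that the pid is still alive between probes), and using spdk_get_version as the probe RPC is this sketch's choice, not necessarily the helper's:

# Poll the RPC socket until it answers. max_retries=100 mirrors the
# value visible in the trace; spdk_get_version is a cheap RPC that
# appears in the rpc_get_methods listing later in this log.
wait_for_rpc_sketch() {
  local sock=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null && return 0
    sleep 0.5
  done
  return 1
}
wait_for_rpc_sketch /var/tmp/spdk.sock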
00:04:01.283 06:02:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.283 06:02:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.283 06:02:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.283 06:02:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:01.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.283 --rc genhtml_branch_coverage=1 00:04:01.283 --rc genhtml_function_coverage=1 00:04:01.283 --rc genhtml_legend=1 00:04:01.283 --rc geninfo_all_blocks=1 00:04:01.283 --rc geninfo_unexecuted_blocks=1 00:04:01.283 00:04:01.283 ' 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:01.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.283 --rc genhtml_branch_coverage=1 00:04:01.283 --rc genhtml_function_coverage=1 00:04:01.283 --rc genhtml_legend=1 00:04:01.283 --rc geninfo_all_blocks=1 00:04:01.283 --rc geninfo_unexecuted_blocks=1 00:04:01.283 00:04:01.283 ' 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:01.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.283 --rc genhtml_branch_coverage=1 00:04:01.283 --rc genhtml_function_coverage=1 00:04:01.283 --rc genhtml_legend=1 00:04:01.283 --rc geninfo_all_blocks=1 00:04:01.283 --rc geninfo_unexecuted_blocks=1 00:04:01.283 00:04:01.283 ' 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:01.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.283 --rc genhtml_branch_coverage=1 00:04:01.283 --rc genhtml_function_coverage=1 00:04:01.283 --rc genhtml_legend=1 00:04:01.283 --rc geninfo_all_blocks=1 00:04:01.283 --rc geninfo_unexecuted_blocks=1 00:04:01.283 00:04:01.283 ' 00:04:01.283 06:02:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:01.283 06:02:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57954 00:04:01.283 06:02:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57954 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57954 ']' 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:01.283 06:02:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.283 06:02:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:01.540 [2024-11-20 06:02:20.928507] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
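Every suite in this log opens with the same lcov gate: scripts/common.sh splits both version strings on dots, dashes, and colons (IFS=.-:) and compares them field by field, as traced above for alias_rpc and again below for spdkcli_tcp. A condensed sketch of the comparison, covering only the numeric path the trace actually exercises:

# Field-by-field version compare, as traced: succeeds when v1 < v2.
version_lt_sketch() {
  local -a ver1 ver2
  local v len
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1                       # equal is not less-than
}
version_lt_sketch 1.15 2 && echo "lcov is pre-2.x"   # the traced 'lt 1.15 2'

Here 1 < 2 settles it on the first field, which is why every suite takes the pre-2.x LCOV_OPTS branch.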
00:04:01.540 [2024-11-20 06:02:20.928626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57954 ] 00:04:01.540 [2024-11-20 06:02:21.088807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.797 [2024-11-20 06:02:21.190873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.361 06:02:21 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:02.361 06:02:21 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:02.361 06:02:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:02.628 06:02:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57954 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57954 ']' 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57954 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57954 00:04:02.628 killing process with pid 57954 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57954' 00:04:02.628 06:02:22 alias_rpc -- common/autotest_common.sh@971 -- # kill 57954 00:04:02.629 06:02:22 alias_rpc -- common/autotest_common.sh@976 -- # wait 57954 00:04:04.033 ************************************ 00:04:04.033 END TEST alias_rpc 00:04:04.033 ************************************ 00:04:04.033 00:04:04.033 real 0m2.869s 00:04:04.033 user 0m2.966s 00:04:04.033 sys 0m0.410s 00:04:04.033 06:02:23 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.033 06:02:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.033 06:02:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:04.033 06:02:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:04.033 06:02:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.033 06:02:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.033 06:02:23 -- common/autotest_common.sh@10 -- # set +x 00:04:04.033 ************************************ 00:04:04.033 START TEST spdkcli_tcp 00:04:04.033 ************************************ 00:04:04.033 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:04.290 * Looking for test storage... 
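alias_rpc's teardown above (and spdkcli_tcp's, further on) runs through the killprocess guard: confirm the pid is alive, read its command name, refuse to signal a sudo wrapper, then kill and reap. A condensed sketch of that pattern, simplified from the traced autotest_common.sh helper; note that wait can only reap a child of the calling shell, which the traced targets are:

# killprocess pattern from the traces, reduced to its checks.
killprocess_sketch() {
  local pid=$1 process_name
  kill -0 "$pid" 2>/dev/null || return 1            # nothing to kill
  process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for spdk_tgt
  [ "$process_name" = sudo ] && return 1            # never kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                       # reap the child
}

In the trace the command name came back reactor_0, the SPDK reactor thread, so the guard passed and pid 57954 was terminated cleanly.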
00:04:04.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:04.290 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:04.290 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:04.290 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:04.290 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.290 06:02:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:04.290 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.290 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:04.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.290 --rc genhtml_branch_coverage=1 00:04:04.291 --rc genhtml_function_coverage=1 00:04:04.291 --rc genhtml_legend=1 00:04:04.291 --rc geninfo_all_blocks=1 00:04:04.291 --rc geninfo_unexecuted_blocks=1 00:04:04.291 00:04:04.291 ' 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.291 --rc genhtml_branch_coverage=1 00:04:04.291 --rc genhtml_function_coverage=1 00:04:04.291 --rc genhtml_legend=1 00:04:04.291 --rc geninfo_all_blocks=1 00:04:04.291 --rc geninfo_unexecuted_blocks=1 00:04:04.291 
00:04:04.291 ' 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.291 --rc genhtml_branch_coverage=1 00:04:04.291 --rc genhtml_function_coverage=1 00:04:04.291 --rc genhtml_legend=1 00:04:04.291 --rc geninfo_all_blocks=1 00:04:04.291 --rc geninfo_unexecuted_blocks=1 00:04:04.291 00:04:04.291 ' 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.291 --rc genhtml_branch_coverage=1 00:04:04.291 --rc genhtml_function_coverage=1 00:04:04.291 --rc genhtml_legend=1 00:04:04.291 --rc geninfo_all_blocks=1 00:04:04.291 --rc geninfo_unexecuted_blocks=1 00:04:04.291 00:04:04.291 ' 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58049 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58049 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58049 ']' 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:04.291 06:02:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:04.291 06:02:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.291 [2024-11-20 06:02:23.843715] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
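spdkcli_tcp exercises the RPC server over TCP without the target listening on TCP itself: the lines that follow start socat as a TCP-to-UNIX bridge on 127.0.0.1:9998 and then aim rpc.py at that address. Lifted from the trace below and shown as they could be run by hand (socket path, port, and flags are the ones this run used):

# Bridge TCP port 9998 to the target's UNIX-domain RPC socket...
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# ...then issue an RPC through the bridge; -r and -t set the retry
# count and timeout, exactly as in the traced invocation.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The reply is the long JSON array of registered RPC method names printed below.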
00:04:04.291 [2024-11-20 06:02:23.843837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58049 ] 00:04:04.548 [2024-11-20 06:02:24.001461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:04.548 [2024-11-20 06:02:24.106738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.548 [2024-11-20 06:02:24.106774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.113 06:02:24 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:05.113 06:02:24 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:05.113 06:02:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58061 00:04:05.113 06:02:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:05.113 06:02:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:05.371 [ 00:04:05.371 "bdev_malloc_delete", 00:04:05.371 "bdev_malloc_create", 00:04:05.371 "bdev_null_resize", 00:04:05.371 "bdev_null_delete", 00:04:05.371 "bdev_null_create", 00:04:05.371 "bdev_nvme_cuse_unregister", 00:04:05.371 "bdev_nvme_cuse_register", 00:04:05.371 "bdev_opal_new_user", 00:04:05.371 "bdev_opal_set_lock_state", 00:04:05.371 "bdev_opal_delete", 00:04:05.371 "bdev_opal_get_info", 00:04:05.371 "bdev_opal_create", 00:04:05.371 "bdev_nvme_opal_revert", 00:04:05.371 "bdev_nvme_opal_init", 00:04:05.371 "bdev_nvme_send_cmd", 00:04:05.371 "bdev_nvme_set_keys", 00:04:05.371 "bdev_nvme_get_path_iostat", 00:04:05.371 "bdev_nvme_get_mdns_discovery_info", 00:04:05.371 "bdev_nvme_stop_mdns_discovery", 00:04:05.371 "bdev_nvme_start_mdns_discovery", 00:04:05.371 "bdev_nvme_set_multipath_policy", 00:04:05.371 "bdev_nvme_set_preferred_path", 00:04:05.371 "bdev_nvme_get_io_paths", 00:04:05.371 "bdev_nvme_remove_error_injection", 00:04:05.371 "bdev_nvme_add_error_injection", 00:04:05.371 "bdev_nvme_get_discovery_info", 00:04:05.371 "bdev_nvme_stop_discovery", 00:04:05.371 "bdev_nvme_start_discovery", 00:04:05.371 "bdev_nvme_get_controller_health_info", 00:04:05.371 "bdev_nvme_disable_controller", 00:04:05.371 "bdev_nvme_enable_controller", 00:04:05.371 "bdev_nvme_reset_controller", 00:04:05.371 "bdev_nvme_get_transport_statistics", 00:04:05.371 "bdev_nvme_apply_firmware", 00:04:05.371 "bdev_nvme_detach_controller", 00:04:05.371 "bdev_nvme_get_controllers", 00:04:05.371 "bdev_nvme_attach_controller", 00:04:05.371 "bdev_nvme_set_hotplug", 00:04:05.371 "bdev_nvme_set_options", 00:04:05.371 "bdev_passthru_delete", 00:04:05.371 "bdev_passthru_create", 00:04:05.371 "bdev_lvol_set_parent_bdev", 00:04:05.371 "bdev_lvol_set_parent", 00:04:05.371 "bdev_lvol_check_shallow_copy", 00:04:05.371 "bdev_lvol_start_shallow_copy", 00:04:05.371 "bdev_lvol_grow_lvstore", 00:04:05.371 "bdev_lvol_get_lvols", 00:04:05.371 "bdev_lvol_get_lvstores", 00:04:05.371 "bdev_lvol_delete", 00:04:05.371 "bdev_lvol_set_read_only", 00:04:05.371 "bdev_lvol_resize", 00:04:05.371 "bdev_lvol_decouple_parent", 00:04:05.371 "bdev_lvol_inflate", 00:04:05.371 "bdev_lvol_rename", 00:04:05.371 "bdev_lvol_clone_bdev", 00:04:05.371 "bdev_lvol_clone", 00:04:05.371 "bdev_lvol_snapshot", 00:04:05.371 "bdev_lvol_create", 00:04:05.371 "bdev_lvol_delete_lvstore", 00:04:05.371 "bdev_lvol_rename_lvstore", 00:04:05.371 
"bdev_lvol_create_lvstore", 00:04:05.371 "bdev_raid_set_options", 00:04:05.371 "bdev_raid_remove_base_bdev", 00:04:05.371 "bdev_raid_add_base_bdev", 00:04:05.371 "bdev_raid_delete", 00:04:05.371 "bdev_raid_create", 00:04:05.371 "bdev_raid_get_bdevs", 00:04:05.371 "bdev_error_inject_error", 00:04:05.371 "bdev_error_delete", 00:04:05.371 "bdev_error_create", 00:04:05.371 "bdev_split_delete", 00:04:05.371 "bdev_split_create", 00:04:05.371 "bdev_delay_delete", 00:04:05.371 "bdev_delay_create", 00:04:05.371 "bdev_delay_update_latency", 00:04:05.371 "bdev_zone_block_delete", 00:04:05.371 "bdev_zone_block_create", 00:04:05.371 "blobfs_create", 00:04:05.371 "blobfs_detect", 00:04:05.371 "blobfs_set_cache_size", 00:04:05.371 "bdev_xnvme_delete", 00:04:05.371 "bdev_xnvme_create", 00:04:05.371 "bdev_aio_delete", 00:04:05.371 "bdev_aio_rescan", 00:04:05.371 "bdev_aio_create", 00:04:05.371 "bdev_ftl_set_property", 00:04:05.371 "bdev_ftl_get_properties", 00:04:05.371 "bdev_ftl_get_stats", 00:04:05.371 "bdev_ftl_unmap", 00:04:05.371 "bdev_ftl_unload", 00:04:05.371 "bdev_ftl_delete", 00:04:05.371 "bdev_ftl_load", 00:04:05.371 "bdev_ftl_create", 00:04:05.372 "bdev_virtio_attach_controller", 00:04:05.372 "bdev_virtio_scsi_get_devices", 00:04:05.372 "bdev_virtio_detach_controller", 00:04:05.372 "bdev_virtio_blk_set_hotplug", 00:04:05.372 "bdev_iscsi_delete", 00:04:05.372 "bdev_iscsi_create", 00:04:05.372 "bdev_iscsi_set_options", 00:04:05.372 "accel_error_inject_error", 00:04:05.372 "ioat_scan_accel_module", 00:04:05.372 "dsa_scan_accel_module", 00:04:05.372 "iaa_scan_accel_module", 00:04:05.372 "keyring_file_remove_key", 00:04:05.372 "keyring_file_add_key", 00:04:05.372 "keyring_linux_set_options", 00:04:05.372 "fsdev_aio_delete", 00:04:05.372 "fsdev_aio_create", 00:04:05.372 "iscsi_get_histogram", 00:04:05.372 "iscsi_enable_histogram", 00:04:05.372 "iscsi_set_options", 00:04:05.372 "iscsi_get_auth_groups", 00:04:05.372 "iscsi_auth_group_remove_secret", 00:04:05.372 "iscsi_auth_group_add_secret", 00:04:05.372 "iscsi_delete_auth_group", 00:04:05.372 "iscsi_create_auth_group", 00:04:05.372 "iscsi_set_discovery_auth", 00:04:05.372 "iscsi_get_options", 00:04:05.372 "iscsi_target_node_request_logout", 00:04:05.372 "iscsi_target_node_set_redirect", 00:04:05.372 "iscsi_target_node_set_auth", 00:04:05.372 "iscsi_target_node_add_lun", 00:04:05.372 "iscsi_get_stats", 00:04:05.372 "iscsi_get_connections", 00:04:05.372 "iscsi_portal_group_set_auth", 00:04:05.372 "iscsi_start_portal_group", 00:04:05.372 "iscsi_delete_portal_group", 00:04:05.372 "iscsi_create_portal_group", 00:04:05.372 "iscsi_get_portal_groups", 00:04:05.372 "iscsi_delete_target_node", 00:04:05.372 "iscsi_target_node_remove_pg_ig_maps", 00:04:05.372 "iscsi_target_node_add_pg_ig_maps", 00:04:05.372 "iscsi_create_target_node", 00:04:05.372 "iscsi_get_target_nodes", 00:04:05.372 "iscsi_delete_initiator_group", 00:04:05.372 "iscsi_initiator_group_remove_initiators", 00:04:05.372 "iscsi_initiator_group_add_initiators", 00:04:05.372 "iscsi_create_initiator_group", 00:04:05.372 "iscsi_get_initiator_groups", 00:04:05.372 "nvmf_set_crdt", 00:04:05.372 "nvmf_set_config", 00:04:05.372 "nvmf_set_max_subsystems", 00:04:05.372 "nvmf_stop_mdns_prr", 00:04:05.372 "nvmf_publish_mdns_prr", 00:04:05.372 "nvmf_subsystem_get_listeners", 00:04:05.372 "nvmf_subsystem_get_qpairs", 00:04:05.372 "nvmf_subsystem_get_controllers", 00:04:05.372 "nvmf_get_stats", 00:04:05.372 "nvmf_get_transports", 00:04:05.372 "nvmf_create_transport", 00:04:05.372 "nvmf_get_targets", 00:04:05.372 
"nvmf_delete_target", 00:04:05.372 "nvmf_create_target", 00:04:05.372 "nvmf_subsystem_allow_any_host", 00:04:05.372 "nvmf_subsystem_set_keys", 00:04:05.372 "nvmf_subsystem_remove_host", 00:04:05.372 "nvmf_subsystem_add_host", 00:04:05.372 "nvmf_ns_remove_host", 00:04:05.372 "nvmf_ns_add_host", 00:04:05.372 "nvmf_subsystem_remove_ns", 00:04:05.372 "nvmf_subsystem_set_ns_ana_group", 00:04:05.372 "nvmf_subsystem_add_ns", 00:04:05.372 "nvmf_subsystem_listener_set_ana_state", 00:04:05.372 "nvmf_discovery_get_referrals", 00:04:05.372 "nvmf_discovery_remove_referral", 00:04:05.372 "nvmf_discovery_add_referral", 00:04:05.372 "nvmf_subsystem_remove_listener", 00:04:05.372 "nvmf_subsystem_add_listener", 00:04:05.372 "nvmf_delete_subsystem", 00:04:05.372 "nvmf_create_subsystem", 00:04:05.372 "nvmf_get_subsystems", 00:04:05.372 "env_dpdk_get_mem_stats", 00:04:05.372 "nbd_get_disks", 00:04:05.372 "nbd_stop_disk", 00:04:05.372 "nbd_start_disk", 00:04:05.372 "ublk_recover_disk", 00:04:05.372 "ublk_get_disks", 00:04:05.372 "ublk_stop_disk", 00:04:05.372 "ublk_start_disk", 00:04:05.372 "ublk_destroy_target", 00:04:05.372 "ublk_create_target", 00:04:05.372 "virtio_blk_create_transport", 00:04:05.372 "virtio_blk_get_transports", 00:04:05.372 "vhost_controller_set_coalescing", 00:04:05.372 "vhost_get_controllers", 00:04:05.372 "vhost_delete_controller", 00:04:05.372 "vhost_create_blk_controller", 00:04:05.372 "vhost_scsi_controller_remove_target", 00:04:05.372 "vhost_scsi_controller_add_target", 00:04:05.372 "vhost_start_scsi_controller", 00:04:05.372 "vhost_create_scsi_controller", 00:04:05.372 "thread_set_cpumask", 00:04:05.372 "scheduler_set_options", 00:04:05.372 "framework_get_governor", 00:04:05.372 "framework_get_scheduler", 00:04:05.372 "framework_set_scheduler", 00:04:05.372 "framework_get_reactors", 00:04:05.372 "thread_get_io_channels", 00:04:05.372 "thread_get_pollers", 00:04:05.372 "thread_get_stats", 00:04:05.372 "framework_monitor_context_switch", 00:04:05.372 "spdk_kill_instance", 00:04:05.372 "log_enable_timestamps", 00:04:05.372 "log_get_flags", 00:04:05.372 "log_clear_flag", 00:04:05.372 "log_set_flag", 00:04:05.372 "log_get_level", 00:04:05.372 "log_set_level", 00:04:05.372 "log_get_print_level", 00:04:05.372 "log_set_print_level", 00:04:05.372 "framework_enable_cpumask_locks", 00:04:05.372 "framework_disable_cpumask_locks", 00:04:05.372 "framework_wait_init", 00:04:05.372 "framework_start_init", 00:04:05.372 "scsi_get_devices", 00:04:05.372 "bdev_get_histogram", 00:04:05.372 "bdev_enable_histogram", 00:04:05.372 "bdev_set_qos_limit", 00:04:05.372 "bdev_set_qd_sampling_period", 00:04:05.372 "bdev_get_bdevs", 00:04:05.372 "bdev_reset_iostat", 00:04:05.372 "bdev_get_iostat", 00:04:05.372 "bdev_examine", 00:04:05.372 "bdev_wait_for_examine", 00:04:05.372 "bdev_set_options", 00:04:05.372 "accel_get_stats", 00:04:05.372 "accel_set_options", 00:04:05.372 "accel_set_driver", 00:04:05.372 "accel_crypto_key_destroy", 00:04:05.372 "accel_crypto_keys_get", 00:04:05.372 "accel_crypto_key_create", 00:04:05.372 "accel_assign_opc", 00:04:05.372 "accel_get_module_info", 00:04:05.372 "accel_get_opc_assignments", 00:04:05.372 "vmd_rescan", 00:04:05.372 "vmd_remove_device", 00:04:05.372 "vmd_enable", 00:04:05.372 "sock_get_default_impl", 00:04:05.372 "sock_set_default_impl", 00:04:05.372 "sock_impl_set_options", 00:04:05.372 "sock_impl_get_options", 00:04:05.372 "iobuf_get_stats", 00:04:05.372 "iobuf_set_options", 00:04:05.372 "keyring_get_keys", 00:04:05.372 "framework_get_pci_devices", 00:04:05.372 
"framework_get_config", 00:04:05.372 "framework_get_subsystems", 00:04:05.372 "fsdev_set_opts", 00:04:05.372 "fsdev_get_opts", 00:04:05.372 "trace_get_info", 00:04:05.372 "trace_get_tpoint_group_mask", 00:04:05.372 "trace_disable_tpoint_group", 00:04:05.372 "trace_enable_tpoint_group", 00:04:05.372 "trace_clear_tpoint_mask", 00:04:05.372 "trace_set_tpoint_mask", 00:04:05.372 "notify_get_notifications", 00:04:05.372 "notify_get_types", 00:04:05.372 "spdk_get_version", 00:04:05.372 "rpc_get_methods" 00:04:05.372 ] 00:04:05.372 06:02:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.372 06:02:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:05.372 06:02:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58049 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58049 ']' 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58049 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58049 00:04:05.372 killing process with pid 58049 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58049' 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58049 00:04:05.372 06:02:24 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58049 00:04:07.275 ************************************ 00:04:07.275 END TEST spdkcli_tcp 00:04:07.275 ************************************ 00:04:07.275 00:04:07.275 real 0m2.861s 00:04:07.275 user 0m5.102s 00:04:07.275 sys 0m0.466s 00:04:07.275 06:02:26 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.275 06:02:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.275 06:02:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:07.275 06:02:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.275 06:02:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.275 06:02:26 -- common/autotest_common.sh@10 -- # set +x 00:04:07.275 ************************************ 00:04:07.275 START TEST dpdk_mem_utility 00:04:07.275 ************************************ 00:04:07.275 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:07.275 * Looking for test storage... 
00:04:07.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:07.275 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.275 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.275 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.275 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.275 06:02:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:07.275 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.275 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.275 --rc genhtml_branch_coverage=1 00:04:07.275 --rc genhtml_function_coverage=1 00:04:07.275 --rc genhtml_legend=1 00:04:07.275 --rc geninfo_all_blocks=1 00:04:07.276 --rc geninfo_unexecuted_blocks=1 00:04:07.276 00:04:07.276 ' 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.276 --rc 
genhtml_branch_coverage=1 00:04:07.276 --rc genhtml_function_coverage=1 00:04:07.276 --rc genhtml_legend=1 00:04:07.276 --rc geninfo_all_blocks=1 00:04:07.276 --rc geninfo_unexecuted_blocks=1 00:04:07.276 00:04:07.276 ' 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.276 --rc genhtml_branch_coverage=1 00:04:07.276 --rc genhtml_function_coverage=1 00:04:07.276 --rc genhtml_legend=1 00:04:07.276 --rc geninfo_all_blocks=1 00:04:07.276 --rc geninfo_unexecuted_blocks=1 00:04:07.276 00:04:07.276 ' 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.276 --rc genhtml_branch_coverage=1 00:04:07.276 --rc genhtml_function_coverage=1 00:04:07.276 --rc genhtml_legend=1 00:04:07.276 --rc geninfo_all_blocks=1 00:04:07.276 --rc geninfo_unexecuted_blocks=1 00:04:07.276 00:04:07.276 ' 00:04:07.276 06:02:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:07.276 06:02:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58155 00:04:07.276 06:02:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58155 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58155 ']' 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:07.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.276 06:02:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:07.276 06:02:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:07.276 [2024-11-20 06:02:26.745595] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
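The dpdk_mem_utility suite starting here drives a two-step workflow, traced in the lines below: ask the running target to dump its DPDK memory state with the env_dpdk_get_mem_stats RPC (which writes /tmp/spdk_mem_dump.txt), then post-process the dump with scripts/dpdk_mem_info.py. Run by hand against a live spdk_tgt it would look like this, on the assumption that rpc_cmd in the trace is a thin wrapper over scripts/rpc.py:

# Step 1: have the target write its memory stats dump.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# -> {"filename": "/tmp/spdk_mem_dump.txt"}

# Step 2: summarize the dump. A plain run prints heap, mempool, and
# memzone totals; -m 0 prints the per-element layout of heap id 0.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0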
00:04:07.276 [2024-11-20 06:02:26.745895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58155 ]
00:04:07.534 [2024-11-20 06:02:26.915782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:07.534 [2024-11-20 06:02:27.016575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.101 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:08.101 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0
00:04:08.101 06:02:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:08.101 06:02:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:08.101 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:08.101 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:08.101 {
00:04:08.101 "filename": "/tmp/spdk_mem_dump.txt"
00:04:08.101 }
00:04:08.101 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:08.101 06:02:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:08.101 DPDK memory size 816.000000 MiB in 1 heap(s)
00:04:08.101 1 heaps totaling size 816.000000 MiB
00:04:08.101 size: 816.000000 MiB heap id: 0
00:04:08.101 end heaps----------
00:04:08.101 9 mempools totaling size 595.772034 MiB
00:04:08.101 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:08.101 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:08.101 size: 92.545471 MiB name: bdev_io_58155
00:04:08.101 size: 50.003479 MiB name: msgpool_58155
00:04:08.101 size: 36.509338 MiB name: fsdev_io_58155
00:04:08.101 size: 21.763794 MiB name: PDU_Pool
00:04:08.101 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:08.101 size: 4.133484 MiB name: evtpool_58155
00:04:08.101 size: 0.026123 MiB name: Session_Pool
00:04:08.101 end mempools-------
00:04:08.101 6 memzones totaling size 4.142822 MiB
00:04:08.101 size: 1.000366 MiB name: RG_ring_0_58155
00:04:08.101 size: 1.000366 MiB name: RG_ring_1_58155
00:04:08.101 size: 1.000366 MiB name: RG_ring_4_58155
00:04:08.101 size: 1.000366 MiB name: RG_ring_5_58155
00:04:08.101 size: 0.125366 MiB name: RG_ring_2_58155
00:04:08.101 size: 0.015991 MiB name: RG_ring_3_58155
00:04:08.101 end memzones-------
00:04:08.101 06:02:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:04:08.102 heap id: 0 total size: 816.000000 MiB number of busy elements: 322 number of free elements: 18
00:04:08.102 list of free elements.
size: 16.789673 MiB 00:04:08.101 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:08.101 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:08.101 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:08.101 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:08.101 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:08.101 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:08.101 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:08.101 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:08.101 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:08.101 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:08.101 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:08.101 element at address: 0x20001ac00000 with size: 0.559021 MiB 00:04:08.101 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:08.101 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:08.101 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:08.101 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:08.102 element at address: 0x200028000000 with size: 0.391663 MiB 00:04:08.102 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:08.102 list of standard malloc elements. size: 199.289429 MiB 00:04:08.102 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:08.102 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:08.102 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:08.102 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:08.102 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:08.102 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:08.102 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:08.102 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:08.102 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:08.102 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:08.102 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:08.102 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:08.102 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:08.102 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:08.102 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:04:08.102 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:08.102 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:08.103 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:08.103 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac911c0 with size: 0.000244 MiB 
00:04:08.103 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:08.103 element at 
address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200028064440 with size: 0.000244 MiB 00:04:08.103 element at address: 0x200028064540 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806b200 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:08.103 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d180 
with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:08.104 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:08.104 list of memzone associated elements. 
size: 599.920898 MiB 00:04:08.104 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:08.104 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:08.104 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:08.104 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:08.104 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:08.104 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58155_0 00:04:08.104 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:08.104 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58155_0 00:04:08.104 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:08.104 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58155_0 00:04:08.104 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:08.104 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:08.104 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:08.104 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:08.104 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:08.104 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58155_0 00:04:08.104 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:08.104 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58155 00:04:08.104 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:08.104 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58155 00:04:08.104 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:08.104 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:08.104 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:08.104 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:08.104 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:08.104 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:08.104 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:08.104 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:08.104 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:08.104 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58155 00:04:08.104 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:08.104 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58155 00:04:08.104 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:08.104 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58155 00:04:08.104 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:08.104 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58155 00:04:08.104 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:08.104 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58155 00:04:08.104 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:08.104 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58155 00:04:08.104 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:08.104 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:08.104 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:08.104 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:08.104 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:08.104 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:08.104 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:08.104 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58155 00:04:08.104 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:08.104 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58155 00:04:08.104 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:08.104 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:08.104 element at address: 0x200028064640 with size: 0.023804 MiB 00:04:08.104 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:08.104 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:08.104 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58155 00:04:08.104 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:04:08.104 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:08.104 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:08.104 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58155 00:04:08.104 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:08.104 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58155 00:04:08.104 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:08.104 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58155 00:04:08.104 element at address: 0x20002806b300 with size: 0.000366 MiB 00:04:08.104 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:08.104 06:02:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:08.104 06:02:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58155 00:04:08.104 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58155 ']' 00:04:08.104 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58155 00:04:08.104 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:08.104 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:08.104 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58155 00:04:08.362 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:08.362 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:08.362 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58155' 00:04:08.362 killing process with pid 58155 00:04:08.362 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58155 00:04:08.362 06:02:27 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58155 00:04:09.731 00:04:09.731 real 0m2.710s 00:04:09.731 user 0m2.712s 00:04:09.731 sys 0m0.388s 00:04:09.731 06:02:29 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.731 06:02:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:09.731 ************************************ 00:04:09.731 END TEST dpdk_mem_utility 00:04:09.731 ************************************ 00:04:09.731 06:02:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:09.731 06:02:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.732 06:02:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.732 06:02:29 -- common/autotest_common.sh@10 -- # set +x 
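The long element/memzone listing above is the DPDK heap accounting that dpdk_memory_utility/test_dpdk_mem_info.sh prints before tearing the target down; each "element at address ... with size ..." entry is one allocation in the rte_malloc heap, and the memzone entries map those regions to named SPDK pools (msgpool, bdev_io, the PDU pools, and so on). The teardown that follows is the harness's killprocess flow: probe the pid with kill -0, resolve its command name, then kill and wait. A minimal Bash sketch of that flow, as a simplification rather than the exact autotest_common.sh source:

    killprocess() {
        local pid=$1
        [[ -z "$pid" ]] && return 1
        # kill -0 sends no signal; it only checks that the pid still exists
        kill -0 "$pid" 2>/dev/null || return 0
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace above
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the child so its exit status is collected before the next test runs
        wait "$pid" 2>/dev/null || true
    }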
00:04:09.732 ************************************ 00:04:09.732 START TEST event 00:04:09.732 ************************************ 00:04:09.732 06:02:29 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:09.732 * Looking for test storage... 00:04:09.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:09.732 06:02:29 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:09.732 06:02:29 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:09.732 06:02:29 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:09.988 06:02:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.988 06:02:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.988 06:02:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.988 06:02:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.988 06:02:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.988 06:02:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.988 06:02:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.988 06:02:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.988 06:02:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.988 06:02:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.988 06:02:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.988 06:02:29 event -- scripts/common.sh@344 -- # case "$op" in 00:04:09.988 06:02:29 event -- scripts/common.sh@345 -- # : 1 00:04:09.988 06:02:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.988 06:02:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.988 06:02:29 event -- scripts/common.sh@365 -- # decimal 1 00:04:09.988 06:02:29 event -- scripts/common.sh@353 -- # local d=1 00:04:09.988 06:02:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.988 06:02:29 event -- scripts/common.sh@355 -- # echo 1 00:04:09.988 06:02:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.988 06:02:29 event -- scripts/common.sh@366 -- # decimal 2 00:04:09.988 06:02:29 event -- scripts/common.sh@353 -- # local d=2 00:04:09.988 06:02:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.988 06:02:29 event -- scripts/common.sh@355 -- # echo 2 00:04:09.988 06:02:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.988 06:02:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.988 06:02:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.988 06:02:29 event -- scripts/common.sh@368 -- # return 0 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:09.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.988 --rc genhtml_branch_coverage=1 00:04:09.988 --rc genhtml_function_coverage=1 00:04:09.988 --rc genhtml_legend=1 00:04:09.988 --rc geninfo_all_blocks=1 00:04:09.988 --rc geninfo_unexecuted_blocks=1 00:04:09.988 00:04:09.988 ' 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:09.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.988 --rc genhtml_branch_coverage=1 00:04:09.988 --rc genhtml_function_coverage=1 00:04:09.988 --rc genhtml_legend=1 00:04:09.988 --rc 
geninfo_all_blocks=1 00:04:09.988 --rc geninfo_unexecuted_blocks=1 00:04:09.988 00:04:09.988 ' 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:09.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.988 --rc genhtml_branch_coverage=1 00:04:09.988 --rc genhtml_function_coverage=1 00:04:09.988 --rc genhtml_legend=1 00:04:09.988 --rc geninfo_all_blocks=1 00:04:09.988 --rc geninfo_unexecuted_blocks=1 00:04:09.988 00:04:09.988 ' 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:09.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.988 --rc genhtml_branch_coverage=1 00:04:09.988 --rc genhtml_function_coverage=1 00:04:09.988 --rc genhtml_legend=1 00:04:09.988 --rc geninfo_all_blocks=1 00:04:09.988 --rc geninfo_unexecuted_blocks=1 00:04:09.988 00:04:09.988 ' 00:04:09.988 06:02:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:09.988 06:02:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:09.988 06:02:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:09.988 06:02:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.989 06:02:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.989 ************************************ 00:04:09.989 START TEST event_perf 00:04:09.989 ************************************ 00:04:09.989 06:02:29 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:09.989 Running I/O for 1 seconds...[2024-11-20 06:02:29.446224] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:09.989 [2024-11-20 06:02:29.446335] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:04:09.989 [2024-11-20 06:02:29.604897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.246 [2024-11-20 06:02:29.712060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.246 [2024-11-20 06:02:29.712168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.246 [2024-11-20 06:02:29.712677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.246 [2024-11-20 06:02:29.712697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.614 Running I/O for 1 seconds... 00:04:11.614 lcore 0: 198441 00:04:11.614 lcore 1: 198441 00:04:11.614 lcore 2: 198439 00:04:11.614 lcore 3: 198439 00:04:11.614 done. 
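event_perf above was launched with -m 0xF -t 1, so four reactors come up and each "lcore N:" line reports how many events that core processed during the one-second run. The core mask is a bitmap: bit i set means lcore i hosts a reactor. A small generic Bash illustration of expanding such a mask (not the SPDK parser itself):

    mask=0xF
    cores=()
    for ((i = 0; i < 64; i++)); do
        # each set bit in the mask is one reactor core
        if (( (mask >> i) & 1 )); then
            cores+=("$i")
        fi
    done
    echo "reactors on cores: ${cores[*]}"   # -> reactors on cores: 0 1 2 3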
00:04:11.614 00:04:11.614 real 0m1.464s 00:04:11.614 user 0m4.259s 00:04:11.614 sys 0m0.084s 00:04:11.614 06:02:30 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.614 06:02:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.614 ************************************ 00:04:11.614 END TEST event_perf 00:04:11.614 ************************************ 00:04:11.614 06:02:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:11.614 06:02:30 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:11.614 06:02:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.614 06:02:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.614 ************************************ 00:04:11.614 START TEST event_reactor 00:04:11.614 ************************************ 00:04:11.614 06:02:30 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:11.614 [2024-11-20 06:02:30.946171] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:11.614 [2024-11-20 06:02:30.946286] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58292 ] 00:04:11.614 [2024-11-20 06:02:31.106116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.614 [2024-11-20 06:02:31.208292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.986 test_start 00:04:12.986 oneshot 00:04:12.986 tick 100 00:04:12.986 tick 100 00:04:12.986 tick 250 00:04:12.986 tick 100 00:04:12.986 tick 100 00:04:12.986 tick 100 00:04:12.986 tick 250 00:04:12.986 tick 500 00:04:12.986 tick 100 00:04:12.986 tick 100 00:04:12.986 tick 250 00:04:12.986 tick 100 00:04:12.986 tick 100 00:04:12.986 test_end 00:04:12.986 ************************************ 00:04:12.986 END TEST event_reactor 00:04:12.986 ************************************ 00:04:12.986 00:04:12.986 real 0m1.447s 00:04:12.986 user 0m1.275s 00:04:12.986 sys 0m0.064s 00:04:12.986 06:02:32 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.986 06:02:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:12.986 06:02:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:12.986 06:02:32 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:12.986 06:02:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.986 06:02:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.986 ************************************ 00:04:12.986 START TEST event_reactor_perf 00:04:12.986 ************************************ 00:04:12.986 06:02:32 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:12.986 [2024-11-20 06:02:32.430216] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:12.986 [2024-11-20 06:02:32.430529] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58323 ] 00:04:12.986 [2024-11-20 06:02:32.592058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.295 [2024-11-20 06:02:32.688270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.324 test_start 00:04:14.324 test_end 00:04:14.324 Performance: 339279 events per second 00:04:14.324 ************************************ 00:04:14.324 END TEST event_reactor_perf 00:04:14.324 ************************************ 00:04:14.324 00:04:14.324 real 0m1.411s 00:04:14.324 user 0m1.236s 00:04:14.324 sys 0m0.065s 00:04:14.324 06:02:33 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.324 06:02:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.324 06:02:33 event -- event/event.sh@49 -- # uname -s 00:04:14.324 06:02:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:14.324 06:02:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:14.324 06:02:33 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.324 06:02:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.324 06:02:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.324 ************************************ 00:04:14.324 START TEST event_scheduler 00:04:14.324 ************************************ 00:04:14.324 06:02:33 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:14.324 * Looking for test storage... 
00:04:14.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:14.324 06:02:33 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.324 06:02:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.324 06:02:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:14.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.582 06:02:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.582 --rc genhtml_branch_coverage=1 00:04:14.582 --rc genhtml_function_coverage=1 00:04:14.582 --rc genhtml_legend=1 00:04:14.582 --rc geninfo_all_blocks=1 00:04:14.582 --rc geninfo_unexecuted_blocks=1 00:04:14.582 00:04:14.582 ' 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.582 --rc genhtml_branch_coverage=1 00:04:14.582 --rc genhtml_function_coverage=1 00:04:14.582 --rc genhtml_legend=1 00:04:14.582 --rc geninfo_all_blocks=1 00:04:14.582 --rc geninfo_unexecuted_blocks=1 00:04:14.582 00:04:14.582 ' 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.582 --rc genhtml_branch_coverage=1 00:04:14.582 --rc genhtml_function_coverage=1 00:04:14.582 --rc genhtml_legend=1 00:04:14.582 --rc geninfo_all_blocks=1 00:04:14.582 --rc geninfo_unexecuted_blocks=1 00:04:14.582 00:04:14.582 ' 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.582 --rc genhtml_branch_coverage=1 00:04:14.582 --rc genhtml_function_coverage=1 00:04:14.582 --rc genhtml_legend=1 00:04:14.582 --rc geninfo_all_blocks=1 00:04:14.582 --rc geninfo_unexecuted_blocks=1 00:04:14.582 00:04:14.582 ' 00:04:14.582 06:02:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:14.582 06:02:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58399 00:04:14.582 06:02:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.582 06:02:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58399 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58399 ']' 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:14.582 06:02:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:14.582 06:02:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.582 [2024-11-20 06:02:34.036628] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:14.582 [2024-11-20 06:02:34.037362] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58399 ] 00:04:14.582 [2024-11-20 06:02:34.194037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:14.839 [2024-11-20 06:02:34.301233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.840 [2024-11-20 06:02:34.301616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.840 [2024-11-20 06:02:34.301743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:14.840 [2024-11-20 06:02:34.301743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.404 06:02:34 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:15.404 06:02:34 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:15.404 06:02:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:15.404 06:02:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.404 06:02:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.404 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.404 POWER: Cannot set governor of lcore 0 to userspace 00:04:15.404 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.404 POWER: Cannot set governor of lcore 0 to performance 00:04:15.404 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.404 POWER: Cannot set governor of lcore 0 to userspace 00:04:15.404 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.404 POWER: Cannot set governor of lcore 0 to userspace 00:04:15.404 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:15.404 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:15.404 POWER: Unable to set Power Management Environment for lcore 0 00:04:15.404 [2024-11-20 06:02:34.887464] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:15.404 [2024-11-20 06:02:34.887487] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:15.404 [2024-11-20 06:02:34.887508] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:15.404 [2024-11-20 06:02:34.887525] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:15.404 [2024-11-20 06:02:34.887534] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:15.404 [2024-11-20 06:02:34.887544] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:15.404 06:02:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.404 06:02:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:15.404 06:02:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.404 06:02:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 [2024-11-20 06:02:35.112231] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:15.662 06:02:35 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:15.662 06:02:35 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.662 06:02:35 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 ************************************ 00:04:15.662 START TEST scheduler_create_thread 00:04:15.662 ************************************ 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 2 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 3 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 4 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 5 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 6 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 7 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 8 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 9 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 10 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.662 06:02:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.036 06:02:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.036 ************************************ 00:04:17.036 END TEST scheduler_create_thread 00:04:17.036 ************************************ 00:04:17.036 00:04:17.036 real 0m1.173s 00:04:17.036 user 0m0.014s 00:04:17.036 sys 0m0.005s 00:04:17.036 06:02:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:17.036 06:02:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.036 06:02:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:17.036 06:02:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58399 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58399 ']' 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58399 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58399 00:04:17.036 killing process with pid 58399 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58399' 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58399 00:04:17.036 06:02:36 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58399 00:04:17.294 [2024-11-20 06:02:36.773676] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
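The scheduler_create_thread test that just finished is RPC choreography against the scheduler app: it registers pinned 100%-active and 0%-active idle threads on cores 0-3, adds unpinned threads at various activity levels, flips one to 50% active so the dynamic scheduler has something to rebalance, and deletes another before teardown. Condensed from the traced commands above (the plugin RPCs come from the test's own scheduler_plugin, exactly as logged; the loop ordering here is a compression of the trace):

    for mask in 0x1 0x2 0x4 0x8; do
        # one fully busy thread pinned to each reactor core
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
        # and a matching idle thread on the same core
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    # drive the unpinned thread to 50% activity
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"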
00:04:18.226 ************************************ 00:04:18.226 END TEST event_scheduler 00:04:18.226 ************************************ 00:04:18.226 00:04:18.226 real 0m3.667s 00:04:18.226 user 0m6.033s 00:04:18.226 sys 0m0.330s 00:04:18.226 06:02:37 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.226 06:02:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.226 06:02:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:18.226 06:02:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:18.226 06:02:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.226 06:02:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.226 06:02:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.226 ************************************ 00:04:18.226 START TEST app_repeat 00:04:18.226 ************************************ 00:04:18.226 06:02:37 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:18.226 Process app_repeat pid: 58483 00:04:18.226 spdk_app_start Round 0 00:04:18.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58483 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58483' 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58483 /var/tmp/spdk-nbd.sock 00:04:18.226 06:02:37 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58483 ']' 00:04:18.226 06:02:37 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.226 06:02:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:18.226 06:02:37 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:18.226 06:02:37 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.226 06:02:37 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:18.226 06:02:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.226 [2024-11-20 06:02:37.594639] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:18.226 [2024-11-20 06:02:37.594774] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58483 ] 00:04:18.226 [2024-11-20 06:02:37.749898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.226 [2024-11-20 06:02:37.837164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.226 [2024-11-20 06:02:37.837307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.158 06:02:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:19.158 06:02:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:19.158 06:02:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.158 Malloc0 00:04:19.158 06:02:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.416 Malloc1 00:04:19.416 06:02:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.416 06:02:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.417 06:02:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.417 06:02:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.417 06:02:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.674 /dev/nbd0 00:04:19.674 06:02:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.674 06:02:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:19.674 06:02:39 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.674 1+0 records in 00:04:19.674 1+0 records out 00:04:19.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423233 s, 9.7 MB/s 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:19.674 06:02:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:19.674 06:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.674 06:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.674 06:02:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.931 /dev/nbd1 00:04:19.931 06:02:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.931 06:02:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.931 06:02:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:19.931 06:02:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:19.931 06:02:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:19.931 06:02:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:19.931 06:02:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:19.931 06:02:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:19.931 06:02:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:19.932 06:02:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:19.932 06:02:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.932 1+0 records in 00:04:19.932 1+0 records out 00:04:19.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178032 s, 23.0 MB/s 00:04:19.932 06:02:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:19.932 06:02:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:19.932 06:02:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:19.932 06:02:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:19.932 06:02:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:19.932 06:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.932 06:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.932 06:02:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.932 06:02:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.932 
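waitfornbd, traced above once per device, gates each nbd attach on two polls: first until the device name appears in /proc/partitions, then until a 4 KiB O_DIRECT read off the device produces a non-empty file. A condensed sketch of that logic; the retry cap of 20 and the temp-file path come from the visible commands, while the sleep interval is an assumption:

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # a non-empty direct read proves the nbd device is actually serving I/O
            dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
            size=$(stat -c %s $tmp)
            rm -f $tmp
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }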
06:02:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:20.189 { 00:04:20.189 "nbd_device": "/dev/nbd0", 00:04:20.189 "bdev_name": "Malloc0" 00:04:20.189 }, 00:04:20.189 { 00:04:20.189 "nbd_device": "/dev/nbd1", 00:04:20.189 "bdev_name": "Malloc1" 00:04:20.189 } 00:04:20.189 ]' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.189 { 00:04:20.189 "nbd_device": "/dev/nbd0", 00:04:20.189 "bdev_name": "Malloc0" 00:04:20.189 }, 00:04:20.189 { 00:04:20.189 "nbd_device": "/dev/nbd1", 00:04:20.189 "bdev_name": "Malloc1" 00:04:20.189 } 00:04:20.189 ]' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.189 /dev/nbd1' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.189 /dev/nbd1' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.189 256+0 records in 00:04:20.189 256+0 records out 00:04:20.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656367 s, 160 MB/s 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.189 256+0 records in 00:04:20.189 256+0 records out 00:04:20.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180362 s, 58.1 MB/s 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.189 256+0 records in 00:04:20.189 256+0 records out 00:04:20.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171205 s, 61.2 MB/s 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.189 06:02:39 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.189 06:02:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.446 06:02:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.704 06:02:40 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.704 06:02:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.962 06:02:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.962 06:02:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.220 06:02:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.784 [2024-11-20 06:02:41.296593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.784 [2024-11-20 06:02:41.381279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.784 [2024-11-20 06:02:41.381303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.042 [2024-11-20 06:02:41.482244] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:22.042 [2024-11-20 06:02:41.482317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.567 spdk_app_start Round 1 00:04:24.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.567 06:02:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.567 06:02:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.567 06:02:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58483 /var/tmp/spdk-nbd.sock 00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58483 ']' 00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
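The count=0 assertions above come from nbd_get_count: after nbd_stop_disk removes both devices, nbd_get_disks returns the empty '[]' seen in the trace and the grep match count drops to zero. Sketch reconstructed from the traced commands (rpc.py invocation abridged):

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        # pull the /dev/nbdX paths out of the JSON reply
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches, hence the 'true' in the trace
        echo "$nbd_disks_name" | grep -c /dev/nbd || true
    }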
00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.567 06:02:43 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:24.567 06:02:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.567 Malloc0 00:04:24.567 06:02:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.824 Malloc1 00:04:24.824 06:02:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.824 06:02:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.081 /dev/nbd0 00:04:25.081 06:02:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.081 06:02:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:25.081 06:02:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.082 1+0 records in 00:04:25.082 1+0 records out 
00:04:25.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187969 s, 21.8 MB/s 00:04:25.082 06:02:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.082 06:02:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:25.082 06:02:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.082 06:02:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:25.082 06:02:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:25.082 06:02:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.082 06:02:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.082 06:02:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.338 /dev/nbd1 00:04:25.338 06:02:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.338 06:02:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.338 1+0 records in 00:04:25.338 1+0 records out 00:04:25.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184393 s, 22.2 MB/s 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:25.338 06:02:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:25.338 06:02:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.338 06:02:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.338 06:02:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.338 06:02:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.338 06:02:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.596 { 00:04:25.596 "nbd_device": "/dev/nbd0", 00:04:25.596 "bdev_name": "Malloc0" 00:04:25.596 }, 00:04:25.596 { 00:04:25.596 "nbd_device": "/dev/nbd1", 00:04:25.596 "bdev_name": "Malloc1" 00:04:25.596 } 
00:04:25.596 ]' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.596 { 00:04:25.596 "nbd_device": "/dev/nbd0", 00:04:25.596 "bdev_name": "Malloc0" 00:04:25.596 }, 00:04:25.596 { 00:04:25.596 "nbd_device": "/dev/nbd1", 00:04:25.596 "bdev_name": "Malloc1" 00:04:25.596 } 00:04:25.596 ]' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.596 /dev/nbd1' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.596 /dev/nbd1' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.596 256+0 records in 00:04:25.596 256+0 records out 00:04:25.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721043 s, 145 MB/s 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.596 256+0 records in 00:04:25.596 256+0 records out 00:04:25.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170825 s, 61.4 MB/s 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.596 256+0 records in 00:04:25.596 256+0 records out 00:04:25.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162945 s, 64.4 MB/s 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.596 06:02:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.596 06:02:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.854 06:02:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.112 06:02:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.369 06:02:45 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.369 06:02:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.369 06:02:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.625 06:02:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.557 [2024-11-20 06:02:46.913971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.557 [2024-11-20 06:02:47.013714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.557 [2024-11-20 06:02:47.013752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.557 [2024-11-20 06:02:47.141065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.557 [2024-11-20 06:02:47.141117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.124 06:02:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:30.124 spdk_app_start Round 2 00:04:30.124 06:02:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:30.124 06:02:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58483 /var/tmp/spdk-nbd.sock 00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58483 ']' 00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:30.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
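The dd write and cmp verify passes traced above are nbd_rpc_data_verify's data cycle, repeated once per round: 1 MiB of /dev/urandom is staged in a pattern file, written through each nbd device with O_DIRECT, then byte-compared back. Condensed sketch of the traced logic, with paths as they appear in the log:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256   # 1 MiB pattern
            for i in "${nbd_list[@]}"; do
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M $tmp_file $i    # any byte mismatch fails the test
            done
            rm $tmp_file
        fi
    }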
00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.124 06:02:49 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:30.124 06:02:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.124 Malloc0 00:04:30.124 06:02:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.430 Malloc1 00:04:30.430 06:02:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.430 06:02:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.687 /dev/nbd0 00:04:30.687 06:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.687 06:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.687 06:02:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:30.687 06:02:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:30.687 06:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:30.687 06:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:30.687 06:02:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:30.687 06:02:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:30.687 06:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:30.688 06:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:30.688 06:02:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.688 1+0 records in 00:04:30.688 1+0 records out 
00:04:30.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195387 s, 21.0 MB/s 00:04:30.688 06:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.688 06:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:30.688 06:02:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.688 06:02:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:30.688 06:02:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:30.688 06:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.688 06:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.688 06:02:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.947 /dev/nbd1 00:04:30.947 06:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.947 06:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.947 1+0 records in 00:04:30.947 1+0 records out 00:04:30.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295596 s, 13.9 MB/s 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:30.947 06:02:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:30.947 06:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.947 06:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.947 06:02:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.947 06:02:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.947 06:02:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.205 { 00:04:31.205 "nbd_device": "/dev/nbd0", 00:04:31.205 "bdev_name": "Malloc0" 00:04:31.205 }, 00:04:31.205 { 00:04:31.205 "nbd_device": "/dev/nbd1", 00:04:31.205 "bdev_name": "Malloc1" 00:04:31.205 } 
00:04:31.205 ]' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.205 { 00:04:31.205 "nbd_device": "/dev/nbd0", 00:04:31.205 "bdev_name": "Malloc0" 00:04:31.205 }, 00:04:31.205 { 00:04:31.205 "nbd_device": "/dev/nbd1", 00:04:31.205 "bdev_name": "Malloc1" 00:04:31.205 } 00:04:31.205 ]' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.205 /dev/nbd1' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.205 /dev/nbd1' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.205 256+0 records in 00:04:31.205 256+0 records out 00:04:31.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476025 s, 220 MB/s 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.205 256+0 records in 00:04:31.205 256+0 records out 00:04:31.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169181 s, 62.0 MB/s 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.205 256+0 records in 00:04:31.205 256+0 records out 00:04:31.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205526 s, 51.0 MB/s 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.205 06:02:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.205 06:02:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.206 06:02:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.206 06:02:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:31.206 06:02:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.206 06:02:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.464 06:02:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.464 06:02:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.464 06:02:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.464 06:02:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.464 06:02:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.464 06:02:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.464 06:02:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.725 06:02:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.725 06:02:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.293 06:02:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.865 [2024-11-20 06:02:52.388129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.865 [2024-11-20 06:02:52.488299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.865 [2024-11-20 06:02:52.488435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.125 [2024-11-20 06:02:52.618461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.125 [2024-11-20 06:02:52.618547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:35.070 06:02:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58483 /var/tmp/spdk-nbd.sock 00:04:35.070 06:02:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58483 ']' 00:04:35.070 06:02:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.070 06:02:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.070 06:02:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
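Every round blocks on waitforlisten before issuing RPCs; its body is hidden behind the xtrace_disable that follows, so only the pid argument, the socket path, and max_retries=100 are known from this log. A hypothetical sketch of what such a gate typically does; the probe RPC and sleep interval are assumptions, not read from this trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        [ -z "$pid" ] && return 1
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            # a successful no-op RPC proves the socket is accepting connections
            if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }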
00:04:35.070 06:02:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.070 06:02:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:35.328 06:02:54 event.app_repeat -- event/event.sh@39 -- # killprocess 58483 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58483 ']' 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58483 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58483 00:04:35.328 killing process with pid 58483 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58483' 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58483 00:04:35.328 06:02:54 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58483 00:04:35.894 spdk_app_start is called in Round 0. 00:04:35.894 Shutdown signal received, stop current app iteration 00:04:35.894 Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 reinitialization... 00:04:35.894 spdk_app_start is called in Round 1. 00:04:35.894 Shutdown signal received, stop current app iteration 00:04:35.894 Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 reinitialization... 00:04:35.894 spdk_app_start is called in Round 2. 00:04:35.894 Shutdown signal received, stop current app iteration 00:04:35.894 Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 reinitialization... 00:04:35.894 spdk_app_start is called in Round 3. 00:04:35.894 Shutdown signal received, stop current app iteration 00:04:35.894 ************************************ 00:04:35.894 END TEST app_repeat 00:04:35.894 ************************************ 00:04:35.894 06:02:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.894 06:02:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:35.894 00:04:35.894 real 0m17.886s 00:04:35.894 user 0m39.165s 00:04:35.894 sys 0m2.109s 00:04:35.894 06:02:55 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.894 06:02:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.894 06:02:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.894 06:02:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:35.894 06:02:55 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.894 06:02:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.894 06:02:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.894 ************************************ 00:04:35.894 START TEST cpu_locks 00:04:35.894 ************************************ 00:04:35.894 06:02:55 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:36.154 * Looking for test storage... 
00:04:36.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.154 06:02:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.154 --rc genhtml_branch_coverage=1 00:04:36.154 --rc genhtml_function_coverage=1 00:04:36.154 --rc genhtml_legend=1 00:04:36.154 --rc geninfo_all_blocks=1 00:04:36.154 --rc geninfo_unexecuted_blocks=1 00:04:36.154 00:04:36.154 ' 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.154 --rc genhtml_branch_coverage=1 00:04:36.154 --rc genhtml_function_coverage=1 
00:04:36.154 --rc genhtml_legend=1 00:04:36.154 --rc geninfo_all_blocks=1 00:04:36.154 --rc geninfo_unexecuted_blocks=1 00:04:36.154 00:04:36.154 ' 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.154 --rc genhtml_branch_coverage=1 00:04:36.154 --rc genhtml_function_coverage=1 00:04:36.154 --rc genhtml_legend=1 00:04:36.154 --rc geninfo_all_blocks=1 00:04:36.154 --rc geninfo_unexecuted_blocks=1 00:04:36.154 00:04:36.154 ' 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.154 --rc genhtml_branch_coverage=1 00:04:36.154 --rc genhtml_function_coverage=1 00:04:36.154 --rc genhtml_legend=1 00:04:36.154 --rc geninfo_all_blocks=1 00:04:36.154 --rc geninfo_unexecuted_blocks=1 00:04:36.154 00:04:36.154 ' 00:04:36.154 06:02:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:36.154 06:02:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:36.154 06:02:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:36.154 06:02:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.154 06:02:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.154 ************************************ 00:04:36.154 START TEST default_locks 00:04:36.154 ************************************ 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58919 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58919 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58919 ']' 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.154 06:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.154 [2024-11-20 06:02:55.747141] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
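The lt/cmp_versions xtrace above is scripts/common.sh probing whether the installed lcov is older than version 2 before switching on the branch/function coverage flags. A condensed re-creation of that comparison, reconstructed from the trace rather than copied from scripts/common.sh (the real helper also routes each field through the decimal validator, omitted here):

    lt() {   # lt A B -> status 0 if version A < version B
      local IFS=.-: v len
      local -a ver1 ver2
      read -ra ver1 <<< "$1"    # "1.15" splits on . - : into (1 15)
      read -ra ver2 <<< "$2"
      (( len = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'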
00:04:36.155 [2024-11-20 06:02:55.747263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:04:36.415 [2024-11-20 06:02:55.906616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.415 [2024-11-20 06:02:56.008673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.986 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.986 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:36.986 06:02:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58919 00:04:36.986 06:02:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58919 00:04:36.986 06:02:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58919 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58919 ']' 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58919 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58919 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:37.246 killing process with pid 58919 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58919' 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58919 00:04:37.246 06:02:56 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58919 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58919 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58919 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58919 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58919 ']' 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.155 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.155 ERROR: process (pid: 58919) is no longer running 00:04:39.155 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58919) - No such process 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:39.155 00:04:39.155 real 0m2.725s 00:04:39.155 user 0m2.744s 00:04:39.155 sys 0m0.444s 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.155 ************************************ 00:04:39.155 END TEST default_locks 00:04:39.155 ************************************ 00:04:39.155 06:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.155 06:02:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:39.155 06:02:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.155 06:02:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.155 06:02:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.155 ************************************ 00:04:39.155 START TEST default_locks_via_rpc 00:04:39.155 ************************************ 00:04:39.155 06:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:39.155 06:02:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58978 00:04:39.155 06:02:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58978 00:04:39.155 06:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58978 ']' 00:04:39.156 06:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.156 06:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
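The default_locks run that just ended reduces to: start one spdk_tgt on core 0, confirm the pid holds a core lock, kill it, and confirm both that the lock is gone and that waiting on the dead pid fails. A minimal re-creation of the lock probe, paraphrased from the lslocks/grep xtrace above rather than quoted from cpu_locks.sh:

    locks_exist() {
      # SPDK's per-core claims show up in lslocks as locks on files
      # named /var/tmp/spdk_cpu_lock_NNN, one per claimed core.
      lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 & pid=$!
    # (poll the RPC socket until it answers, as waitforlisten does)
    locks_exist "$pid" && echo "pid $pid holds its core lock"
    kill "$pid"; wait "$pid" 2>/dev/null
    locks_exist "$pid" || echo "lock released with the process"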
00:04:39.156 06:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.156 06:02:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.156 06:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.156 06:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.156 [2024-11-20 06:02:58.536059] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:39.156 [2024-11-20 06:02:58.536191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58978 ] 00:04:39.156 [2024-11-20 06:02:58.697584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.415 [2024-11-20 06:02:58.800804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.987 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.987 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58978 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58978 00:04:39.988 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58978 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58978 ']' 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58978 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:40.248 06:02:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58978 00:04:40.248 killing process with pid 58978 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58978' 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58978 00:04:40.248 06:02:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58978 00:04:41.629 ************************************ 00:04:41.629 END TEST default_locks_via_rpc 00:04:41.629 ************************************ 00:04:41.629 00:04:41.629 real 0m2.718s 00:04:41.629 user 0m2.754s 00:04:41.629 sys 0m0.464s 00:04:41.629 06:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.629 06:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.629 06:03:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:41.629 06:03:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.629 06:03:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.629 06:03:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.629 ************************************ 00:04:41.629 START TEST non_locking_app_on_locked_coremask 00:04:41.629 ************************************ 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59035 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59035 /var/tmp/spdk.sock 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59035 ']' 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.629 06:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.889 [2024-11-20 06:03:01.320412] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
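The via_rpc variant that just finished toggles the same per-core locks at runtime instead of at startup; rpc_cmd in the harness forwards to scripts/rpc.py. A hand-run equivalent, assuming the default /var/tmp/spdk.sock socket, with $tgt_pid as a stand-in for the target's pid (both RPC names appear verbatim in the trace):

    scripts/rpc.py framework_disable_cpumask_locks   # releases the spdk_cpu_lock_* claims
    scripts/rpc.py framework_enable_cpumask_locks    # re-acquires them
    lslocks -p "$tgt_pid" | grep spdk_cpu_lock       # claims visible again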
00:04:41.889 [2024-11-20 06:03:01.320563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:04:41.889 [2024-11-20 06:03:01.478994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.150 [2024-11-20 06:03:01.581826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59051 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59051 /var/tmp/spdk2.sock 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59051 ']' 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.720 06:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.720 [2024-11-20 06:03:02.250866] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:42.720 [2024-11-20 06:03:02.251175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:04:42.981 [2024-11-20 06:03:02.427780] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
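non_locking_app_on_locked_coremask runs two targets on the same core: the first claims core 0, the second is started with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice just above) and a second RPC socket, so both come up without a conflict. The shape of the setup, with the binary path as in this run:

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 &                                                  # claims core 0
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips the claim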
00:04:42.981 [2024-11-20 06:03:02.427850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.242 [2024-11-20 06:03:02.635269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.183 06:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.183 06:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:44.183 06:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59035 00:04:44.183 06:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59035 00:04:44.183 06:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.443 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59035 00:04:44.443 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59035 ']' 00:04:44.443 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59035 00:04:44.443 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:44.443 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.443 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59035 00:04:44.778 killing process with pid 59035 00:04:44.778 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.778 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.778 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59035' 00:04:44.778 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59035 00:04:44.778 06:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59035 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59051 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59051 ']' 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59051 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59051 00:04:48.073 killing process with pid 59051 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59051' 00:04:48.073 06:03:07 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59051 00:04:48.073 06:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59051 00:04:49.011 00:04:49.011 real 0m7.379s 00:04:49.011 user 0m7.605s 00:04:49.011 sys 0m0.863s 00:04:49.011 ************************************ 00:04:49.011 END TEST non_locking_app_on_locked_coremask 00:04:49.011 06:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.011 06:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.011 ************************************ 00:04:49.271 06:03:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:49.271 06:03:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.271 06:03:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.271 06:03:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.271 ************************************ 00:04:49.271 START TEST locking_app_on_unlocked_coremask 00:04:49.271 ************************************ 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59159 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59159 /var/tmp/spdk.sock 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59159 ']' 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.271 06:03:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.271 [2024-11-20 06:03:08.785888] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:04:49.271 [2024-11-20 06:03:08.786070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59159 ] 00:04:49.531 [2024-11-20 06:03:08.948826] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:49.531 [2024-11-20 06:03:08.948887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.531 [2024-11-20 06:03:09.051087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59174 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59174 /var/tmp/spdk2.sock 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59174 ']' 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.099 06:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:50.099 [2024-11-20 06:03:09.720964] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
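locking_app_on_unlocked_coremask inverts that arrangement: the first target (59159) was started with --disable-cpumask-locks, so core 0 is left unclaimed, and the second (59174) is started without the flag and becomes the lock owner, which the lslocks -p 59174 check that follows confirms. Sketched with the same stand-in variable:

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 --disable-cpumask-locks &             # first: takes no lock
    "$BIN" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!      # second: claims core 0
    lslocks -p "$pid2" | grep -q spdk_cpu_lock          # succeeds for the second pid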
00:04:50.099 [2024-11-20 06:03:09.721252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59174 ] 00:04:50.360 [2024-11-20 06:03:09.897180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.622 [2024-11-20 06:03:10.099716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59174 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59174 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59159 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59159 ']' 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59159 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59159 00:04:52.001 killing process with pid 59159 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59159' 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59159 00:04:52.001 06:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59159 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59174 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59174 ']' 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59174 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59174 00:04:55.320 killing process with pid 59174 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.320 06:03:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59174' 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59174 00:04:55.320 06:03:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59174 00:04:56.255 00:04:56.255 real 0m6.869s 00:04:56.255 user 0m7.090s 00:04:56.255 sys 0m0.852s 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.255 ************************************ 00:04:56.255 END TEST locking_app_on_unlocked_coremask 00:04:56.255 ************************************ 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.255 06:03:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:56.255 06:03:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.255 06:03:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.255 06:03:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.255 ************************************ 00:04:56.255 START TEST locking_app_on_locked_coremask 00:04:56.255 ************************************ 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59271 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59271 /var/tmp/spdk.sock 00:04:56.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59271 ']' 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.255 06:03:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.255 [2024-11-20 06:03:15.692211] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:56.255 [2024-11-20 06:03:15.692334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:04:56.255 [2024-11-20 06:03:15.850938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.515 [2024-11-20 06:03:15.954949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59287 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59287 /var/tmp/spdk2.sock 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59287 /var/tmp/spdk2.sock 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:57.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59287 /var/tmp/spdk2.sock 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59287 ']' 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.088 06:03:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.088 [2024-11-20 06:03:16.631482] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
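The "NOT waitforlisten 59287" call traced above is the harness's expected-failure wrapper: run the command, capture its status, and report success only if it failed. A simplified reconstruction from the es/valid_exec_arg traces (the real autotest_common.sh also distinguishes functions from binaries before dispatching):

    NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # status 0 (pass) only when the wrapped command failed
    }
    NOT waitforlisten 59287 /var/tmp/spdk2.sock   # passes: 59287 never came up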
00:04:57.088 [2024-11-20 06:03:16.631614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:04:57.347 [2024-11-20 06:03:16.806384] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59271 has claimed it. 00:04:57.347 [2024-11-20 06:03:16.806438] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:57.913 ERROR: process (pid: 59287) is no longer running 00:04:57.913 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59287) - No such process 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59271 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59271 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59271 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59271 ']' 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59271 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59271 00:04:57.913 killing process with pid 59271 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59271' 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59271 00:04:57.913 06:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59271 00:04:59.825 00:04:59.825 real 0m3.358s 00:04:59.825 user 0m3.573s 00:04:59.825 sys 0m0.534s 00:04:59.825 06:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.825 ************************************ 00:04:59.825 END 
TEST locking_app_on_locked_coremask 00:04:59.825 ************************************ 00:04:59.825 06:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.825 06:03:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:59.825 06:03:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.825 06:03:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.825 06:03:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.825 ************************************ 00:04:59.825 START TEST locking_overlapped_coremask 00:04:59.825 ************************************ 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59346 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59346 /var/tmp/spdk.sock 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59346 ']' 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.825 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.825 [2024-11-20 06:03:19.110358] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:04:59.825 [2024-11-20 06:03:19.110657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59346 ] 00:04:59.825 [2024-11-20 06:03:19.270897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.825 [2024-11-20 06:03:19.375390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.825 [2024-11-20 06:03:19.376033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.825 [2024-11-20 06:03:19.376181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59364 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59364 /var/tmp/spdk2.sock 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59364 /var/tmp/spdk2.sock 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59364 /var/tmp/spdk2.sock 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59364 ']' 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.395 06:03:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.655 [2024-11-20 06:03:20.045674] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
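The collision this test engineers is readable straight off the two core masks: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so both targets want core 2, the exact core named in the claim error that follows. A quick check:

    for mask in 0x7 0x1c; do
      printf '%s -> cores:' "$mask"
      for c in {0..4}; do (( mask >> c & 1 )) && printf ' %d' "$c"; done
      echo
    done
    # 0x7 -> cores: 0 1 2    0x1c -> cores: 2 3 4    (shared: core 2)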
00:05:00.655 [2024-11-20 06:03:20.045947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59364 ] 00:05:00.655 [2024-11-20 06:03:20.224257] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59346 has claimed it. 00:05:00.655 [2024-11-20 06:03:20.227519] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.224 ERROR: process (pid: 59364) is no longer running 00:05:01.224 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59364) - No such process 00:05:01.224 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.224 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:01.224 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:01.224 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.224 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:01.224 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59346 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59346 ']' 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59346 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59346 00:05:01.225 killing process with pid 59346 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59346' 00:05:01.225 06:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59346 00:05:01.225 06:03:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59346 00:05:02.605 00:05:02.605 real 0m3.135s 00:05:02.605 user 0m8.427s 00:05:02.605 sys 0m0.458s 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.605 ************************************ 00:05:02.605 END TEST locking_overlapped_coremask 00:05:02.605 ************************************ 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.605 06:03:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:02.605 06:03:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.605 06:03:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.605 06:03:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.605 ************************************ 00:05:02.605 START TEST locking_overlapped_coremask_via_rpc 00:05:02.605 ************************************ 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59417 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59417 /var/tmp/spdk.sock 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59417 ']' 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:02.605 06:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.864 [2024-11-20 06:03:22.296134] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:02.864 [2024-11-20 06:03:22.296255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59417 ] 00:05:02.864 [2024-11-20 06:03:22.457184] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
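After the failed 0x1c target exits, the check_remaining_locks trace a little further up verifies that only the surviving -m 0x7 target's claims are left. The claims are ordinary files under /var/tmp, one per core, so the verification is a glob-against-brace-expansion comparison, as in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of the 0x7 mask
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'only expected locks remain'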
00:05:02.864 [2024-11-20 06:03:22.457229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.121 [2024-11-20 06:03:22.556849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.121 [2024-11-20 06:03:22.557191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.121 [2024-11-20 06:03:22.557205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59435 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59435 /var/tmp/spdk2.sock 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59435 ']' 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.686 06:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.686 [2024-11-20 06:03:23.254347] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:03.686 [2024-11-20 06:03:23.254514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:05:03.944 [2024-11-20 06:03:23.444436] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.944 [2024-11-20 06:03:23.448504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.201 [2024-11-20 06:03:23.650847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.201 [2024-11-20 06:03:23.650912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.201 [2024-11-20 06:03:23.650934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.572 [2024-11-20 06:03:24.813636] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59417 has claimed it. 
00:05:05.572 request: 00:05:05.572 { 00:05:05.572 "method": "framework_enable_cpumask_locks", 00:05:05.572 "req_id": 1 00:05:05.572 } 00:05:05.572 Got JSON-RPC error response 00:05:05.572 response: 00:05:05.572 { 00:05:05.572 "code": -32603, 00:05:05.572 "message": "Failed to claim CPU core: 2" 00:05:05.572 } 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59417 /var/tmp/spdk.sock 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59417 ']' 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.572 06:03:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59435 /var/tmp/spdk2.sock 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59435 ']' 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.572 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.830 00:05:05.830 real 0m3.037s 00:05:05.830 user 0m1.124s 00:05:05.830 sys 0m0.141s 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.830 06:03:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.830 ************************************ 00:05:05.830 END TEST locking_overlapped_coremask_via_rpc 00:05:05.830 ************************************ 00:05:05.830 06:03:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:05.830 06:03:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59417 ]] 00:05:05.830 06:03:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59417 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59417 ']' 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59417 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59417 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:05.830 killing process with pid 59417 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59417' 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59417 00:05:05.830 06:03:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59417 00:05:07.203 06:03:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59435 ]] 00:05:07.203 06:03:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59435 00:05:07.203 06:03:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59435 ']' 00:05:07.203 06:03:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59435 00:05:07.203 06:03:26 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:07.203 06:03:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.203 
06:03:26 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59435 00:05:07.460 06:03:26 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:07.460 06:03:26 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:07.460 killing process with pid 59435 00:05:07.460 06:03:26 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59435' 00:05:07.460 06:03:26 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59435 00:05:07.460 06:03:26 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59435 00:05:08.831 06:03:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.831 06:03:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:08.831 06:03:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59417 ]] 00:05:08.831 06:03:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59417 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59417 ']' 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59417 00:05:08.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59417) - No such process 00:05:08.831 Process with pid 59417 is not found 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59417 is not found' 00:05:08.831 06:03:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59435 ]] 00:05:08.831 06:03:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59435 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59435 ']' 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59435 00:05:08.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59435) - No such process 00:05:08.831 Process with pid 59435 is not found 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59435 is not found' 00:05:08.831 06:03:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.831 ************************************ 00:05:08.831 END TEST cpu_locks 00:05:08.831 ************************************ 00:05:08.831 00:05:08.831 real 0m32.550s 00:05:08.831 user 0m55.482s 00:05:08.831 sys 0m4.601s 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.831 06:03:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.831 00:05:08.831 real 0m58.806s 00:05:08.831 user 1m47.605s 00:05:08.831 sys 0m7.449s 00:05:08.831 06:03:28 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.831 ************************************ 00:05:08.831 END TEST event 00:05:08.831 ************************************ 00:05:08.831 06:03:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.831 06:03:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:08.831 06:03:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.831 06:03:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.831 06:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:08.831 ************************************ 00:05:08.831 START TEST thread 00:05:08.831 ************************************ 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:08.831 * Looking for test storage... 
00:05:08.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.831 06:03:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.831 06:03:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.831 06:03:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.831 06:03:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.831 06:03:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.831 06:03:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.831 06:03:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.831 06:03:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.831 06:03:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.831 06:03:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.831 06:03:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.831 06:03:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:08.831 06:03:28 thread -- scripts/common.sh@345 -- # : 1 00:05:08.831 06:03:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.831 06:03:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.831 06:03:28 thread -- scripts/common.sh@365 -- # decimal 1 00:05:08.831 06:03:28 thread -- scripts/common.sh@353 -- # local d=1 00:05:08.831 06:03:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.831 06:03:28 thread -- scripts/common.sh@355 -- # echo 1 00:05:08.831 06:03:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.831 06:03:28 thread -- scripts/common.sh@366 -- # decimal 2 00:05:08.831 06:03:28 thread -- scripts/common.sh@353 -- # local d=2 00:05:08.831 06:03:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.831 06:03:28 thread -- scripts/common.sh@355 -- # echo 2 00:05:08.831 06:03:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.831 06:03:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.831 06:03:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.831 06:03:28 thread -- scripts/common.sh@368 -- # return 0 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.831 --rc genhtml_branch_coverage=1 00:05:08.831 --rc genhtml_function_coverage=1 00:05:08.831 --rc genhtml_legend=1 00:05:08.831 --rc geninfo_all_blocks=1 00:05:08.831 --rc geninfo_unexecuted_blocks=1 00:05:08.831 00:05:08.831 ' 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.831 --rc genhtml_branch_coverage=1 00:05:08.831 --rc genhtml_function_coverage=1 00:05:08.831 --rc genhtml_legend=1 00:05:08.831 --rc geninfo_all_blocks=1 00:05:08.831 --rc geninfo_unexecuted_blocks=1 00:05:08.831 00:05:08.831 ' 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:08.831 --rc genhtml_branch_coverage=1 00:05:08.831 --rc genhtml_function_coverage=1 00:05:08.831 --rc genhtml_legend=1 00:05:08.831 --rc geninfo_all_blocks=1 00:05:08.831 --rc geninfo_unexecuted_blocks=1 00:05:08.831 00:05:08.831 ' 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.831 --rc genhtml_branch_coverage=1 00:05:08.831 --rc genhtml_function_coverage=1 00:05:08.831 --rc genhtml_legend=1 00:05:08.831 --rc geninfo_all_blocks=1 00:05:08.831 --rc geninfo_unexecuted_blocks=1 00:05:08.831 00:05:08.831 ' 00:05:08.831 06:03:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.831 06:03:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.831 ************************************ 00:05:08.831 START TEST thread_poller_perf 00:05:08.831 ************************************ 00:05:08.831 06:03:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.831 [2024-11-20 06:03:28.297802] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:08.831 [2024-11-20 06:03:28.297917] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59595 ] 00:05:08.831 [2024-11-20 06:03:28.461107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.089 [2024-11-20 06:03:28.559626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.089 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:10.496 [2024-11-20T06:03:30.129Z] ====================================== 00:05:10.496 [2024-11-20T06:03:30.129Z] busy:2614821896 (cyc) 00:05:10.496 [2024-11-20T06:03:30.129Z] total_run_count: 304000 00:05:10.496 [2024-11-20T06:03:30.129Z] tsc_hz: 2600000000 (cyc) 00:05:10.497 [2024-11-20T06:03:30.130Z] ====================================== 00:05:10.497 [2024-11-20T06:03:30.130Z] poller_cost: 8601 (cyc), 3308 (nsec) 00:05:10.497 00:05:10.497 real 0m1.453s 00:05:10.497 user 0m1.279s 00:05:10.497 sys 0m0.067s 00:05:10.497 06:03:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.497 06:03:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.497 ************************************ 00:05:10.497 END TEST thread_poller_perf 00:05:10.497 ************************************ 00:05:10.497 06:03:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:10.497 06:03:29 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:10.497 06:03:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.497 06:03:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.497 ************************************ 00:05:10.497 START TEST thread_poller_perf 00:05:10.497 ************************************ 00:05:10.497 06:03:29 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:10.497 [2024-11-20 06:03:29.790046] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:10.497 [2024-11-20 06:03:29.790160] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59631 ] 00:05:10.497 [2024-11-20 06:03:29.949946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.497 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:10.497 [2024-11-20 06:03:30.066838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.865 [2024-11-20T06:03:31.498Z] ====================================== 00:05:11.865 [2024-11-20T06:03:31.498Z] busy:2603295466 (cyc) 00:05:11.865 [2024-11-20T06:03:31.498Z] total_run_count: 3943000 00:05:11.865 [2024-11-20T06:03:31.498Z] tsc_hz: 2600000000 (cyc) 00:05:11.865 [2024-11-20T06:03:31.498Z] ====================================== 00:05:11.865 [2024-11-20T06:03:31.498Z] poller_cost: 660 (cyc), 253 (nsec) 00:05:11.865 00:05:11.865 real 0m1.467s 00:05:11.865 user 0m1.296s 00:05:11.865 sys 0m0.064s 00:05:11.865 06:03:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.865 06:03:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.865 ************************************ 00:05:11.865 END TEST thread_poller_perf 00:05:11.865 ************************************ 00:05:11.865 06:03:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:11.865 00:05:11.865 real 0m3.139s 00:05:11.865 user 0m2.686s 00:05:11.865 sys 0m0.242s 00:05:11.865 06:03:31 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.865 06:03:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.865 ************************************ 00:05:11.865 END TEST thread 00:05:11.865 ************************************ 00:05:11.865 06:03:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:11.865 06:03:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:11.865 06:03:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.865 06:03:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.865 06:03:31 -- common/autotest_common.sh@10 -- # set +x 00:05:11.865 ************************************ 00:05:11.865 START TEST app_cmdline 00:05:11.865 ************************************ 00:05:11.865 06:03:31 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:11.865 * Looking for test storage... 
00:05:11.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:11.865 06:03:31 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.865 06:03:31 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.865 06:03:31 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.865 06:03:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.865 06:03:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.866 06:03:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.866 --rc genhtml_branch_coverage=1 00:05:11.866 --rc genhtml_function_coverage=1 00:05:11.866 --rc genhtml_legend=1 00:05:11.866 --rc geninfo_all_blocks=1 00:05:11.866 --rc geninfo_unexecuted_blocks=1 00:05:11.866 00:05:11.866 ' 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.866 --rc genhtml_branch_coverage=1 00:05:11.866 --rc genhtml_function_coverage=1 00:05:11.866 --rc genhtml_legend=1 00:05:11.866 --rc geninfo_all_blocks=1 00:05:11.866 --rc geninfo_unexecuted_blocks=1 00:05:11.866 
00:05:11.866 ' 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.866 --rc genhtml_branch_coverage=1 00:05:11.866 --rc genhtml_function_coverage=1 00:05:11.866 --rc genhtml_legend=1 00:05:11.866 --rc geninfo_all_blocks=1 00:05:11.866 --rc geninfo_unexecuted_blocks=1 00:05:11.866 00:05:11.866 ' 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.866 --rc genhtml_branch_coverage=1 00:05:11.866 --rc genhtml_function_coverage=1 00:05:11.866 --rc genhtml_legend=1 00:05:11.866 --rc geninfo_all_blocks=1 00:05:11.866 --rc geninfo_unexecuted_blocks=1 00:05:11.866 00:05:11.866 ' 00:05:11.866 06:03:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:11.866 06:03:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59715 00:05:11.866 06:03:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59715 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59715 ']' 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:11.866 06:03:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:11.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:11.866 06:03:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.866 [2024-11-20 06:03:31.486533] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:05:11.866 [2024-11-20 06:03:31.486655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59715 ] 00:05:12.123 [2024-11-20 06:03:31.642966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.123 [2024-11-20 06:03:31.741852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.693 06:03:32 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.693 06:03:32 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:12.693 06:03:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:12.953 { 00:05:12.953 "version": "SPDK v25.01-pre git sha1 9b64b1304", 00:05:12.953 "fields": { 00:05:12.953 "major": 25, 00:05:12.953 "minor": 1, 00:05:12.953 "patch": 0, 00:05:12.953 "suffix": "-pre", 00:05:12.953 "commit": "9b64b1304" 00:05:12.953 } 00:05:12.953 } 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:12.953 06:03:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:12.953 06:03:32 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:13.211 request: 00:05:13.211 { 00:05:13.211 "method": "env_dpdk_get_mem_stats", 00:05:13.211 "req_id": 1 00:05:13.211 } 00:05:13.211 Got JSON-RPC error response 00:05:13.211 response: 00:05:13.211 { 00:05:13.211 "code": -32601, 00:05:13.211 "message": "Method not found" 00:05:13.211 } 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.211 06:03:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59715 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59715 ']' 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59715 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59715 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.211 killing process with pid 59715 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59715' 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@971 -- # kill 59715 00:05:13.211 06:03:32 app_cmdline -- common/autotest_common.sh@976 -- # wait 59715 00:05:15.146 00:05:15.146 real 0m3.013s 00:05:15.146 user 0m3.332s 00:05:15.146 sys 0m0.419s 00:05:15.146 06:03:34 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.146 06:03:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.146 ************************************ 00:05:15.146 END TEST app_cmdline 00:05:15.146 ************************************ 00:05:15.146 06:03:34 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:15.146 06:03:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.146 06:03:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.146 06:03:34 -- common/autotest_common.sh@10 -- # set +x 00:05:15.146 ************************************ 00:05:15.146 START TEST version 00:05:15.146 ************************************ 00:05:15.146 06:03:34 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:15.146 * Looking for test storage... 
00:05:15.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.147 06:03:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.147 06:03:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.147 06:03:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.147 06:03:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.147 06:03:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.147 06:03:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.147 06:03:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.147 06:03:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.147 06:03:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.147 06:03:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.147 06:03:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.147 06:03:34 version -- scripts/common.sh@344 -- # case "$op" in 00:05:15.147 06:03:34 version -- scripts/common.sh@345 -- # : 1 00:05:15.147 06:03:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.147 06:03:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.147 06:03:34 version -- scripts/common.sh@365 -- # decimal 1 00:05:15.147 06:03:34 version -- scripts/common.sh@353 -- # local d=1 00:05:15.147 06:03:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.147 06:03:34 version -- scripts/common.sh@355 -- # echo 1 00:05:15.147 06:03:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.147 06:03:34 version -- scripts/common.sh@366 -- # decimal 2 00:05:15.147 06:03:34 version -- scripts/common.sh@353 -- # local d=2 00:05:15.147 06:03:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.147 06:03:34 version -- scripts/common.sh@355 -- # echo 2 00:05:15.147 06:03:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.147 06:03:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.147 06:03:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.147 06:03:34 version -- scripts/common.sh@368 -- # return 0 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.147 --rc genhtml_branch_coverage=1 00:05:15.147 --rc genhtml_function_coverage=1 00:05:15.147 --rc genhtml_legend=1 00:05:15.147 --rc geninfo_all_blocks=1 00:05:15.147 --rc geninfo_unexecuted_blocks=1 00:05:15.147 00:05:15.147 ' 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.147 --rc genhtml_branch_coverage=1 00:05:15.147 --rc genhtml_function_coverage=1 00:05:15.147 --rc genhtml_legend=1 00:05:15.147 --rc geninfo_all_blocks=1 00:05:15.147 --rc geninfo_unexecuted_blocks=1 00:05:15.147 00:05:15.147 ' 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.147 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:15.147 --rc genhtml_branch_coverage=1 00:05:15.147 --rc genhtml_function_coverage=1 00:05:15.147 --rc genhtml_legend=1 00:05:15.147 --rc geninfo_all_blocks=1 00:05:15.147 --rc geninfo_unexecuted_blocks=1 00:05:15.147 00:05:15.147 ' 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.147 --rc genhtml_branch_coverage=1 00:05:15.147 --rc genhtml_function_coverage=1 00:05:15.147 --rc genhtml_legend=1 00:05:15.147 --rc geninfo_all_blocks=1 00:05:15.147 --rc geninfo_unexecuted_blocks=1 00:05:15.147 00:05:15.147 ' 00:05:15.147 06:03:34 version -- app/version.sh@17 -- # get_header_version major 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # cut -f2 00:05:15.147 06:03:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:15.147 06:03:34 version -- app/version.sh@17 -- # major=25 00:05:15.147 06:03:34 version -- app/version.sh@18 -- # get_header_version minor 00:05:15.147 06:03:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # cut -f2 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:15.147 06:03:34 version -- app/version.sh@18 -- # minor=1 00:05:15.147 06:03:34 version -- app/version.sh@19 -- # get_header_version patch 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # cut -f2 00:05:15.147 06:03:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:15.147 06:03:34 version -- app/version.sh@19 -- # patch=0 00:05:15.147 06:03:34 version -- app/version.sh@20 -- # get_header_version suffix 00:05:15.147 06:03:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:15.147 06:03:34 version -- app/version.sh@14 -- # cut -f2 00:05:15.147 06:03:34 version -- app/version.sh@20 -- # suffix=-pre 00:05:15.147 06:03:34 version -- app/version.sh@22 -- # version=25.1 00:05:15.147 06:03:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:15.147 06:03:34 version -- app/version.sh@28 -- # version=25.1rc0 00:05:15.147 06:03:34 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:15.147 06:03:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:15.147 06:03:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:15.147 06:03:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:15.147 00:05:15.147 real 0m0.206s 00:05:15.147 user 0m0.131s 00:05:15.147 sys 0m0.104s 00:05:15.147 06:03:34 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.147 06:03:34 version -- common/autotest_common.sh@10 -- # set +x 00:05:15.147 ************************************ 00:05:15.147 END TEST version 00:05:15.147 ************************************ 00:05:15.147 06:03:34 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:15.148 06:03:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:15.148 06:03:34 -- spdk/autotest.sh@194 -- # uname -s 00:05:15.148 06:03:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:15.148 06:03:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:15.148 06:03:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:15.148 06:03:34 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:15.148 06:03:34 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:15.148 06:03:34 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:15.148 06:03:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.148 06:03:34 -- common/autotest_common.sh@10 -- # set +x 00:05:15.148 ************************************ 00:05:15.148 START TEST blockdev_nvme 00:05:15.148 ************************************ 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:15.148 * Looking for test storage... 00:05:15.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.148 06:03:34 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.148 --rc genhtml_branch_coverage=1 00:05:15.148 --rc genhtml_function_coverage=1 00:05:15.148 --rc genhtml_legend=1 00:05:15.148 --rc geninfo_all_blocks=1 00:05:15.148 --rc geninfo_unexecuted_blocks=1 00:05:15.148 00:05:15.148 ' 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.148 --rc genhtml_branch_coverage=1 00:05:15.148 --rc genhtml_function_coverage=1 00:05:15.148 --rc genhtml_legend=1 00:05:15.148 --rc geninfo_all_blocks=1 00:05:15.148 --rc geninfo_unexecuted_blocks=1 00:05:15.148 00:05:15.148 ' 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.148 --rc genhtml_branch_coverage=1 00:05:15.148 --rc genhtml_function_coverage=1 00:05:15.148 --rc genhtml_legend=1 00:05:15.148 --rc geninfo_all_blocks=1 00:05:15.148 --rc geninfo_unexecuted_blocks=1 00:05:15.148 00:05:15.148 ' 00:05:15.148 06:03:34 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.148 --rc genhtml_branch_coverage=1 00:05:15.148 --rc genhtml_function_coverage=1 00:05:15.148 --rc genhtml_legend=1 00:05:15.148 --rc geninfo_all_blocks=1 00:05:15.148 --rc geninfo_unexecuted_blocks=1 00:05:15.148 00:05:15.148 ' 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:15.148 06:03:34 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:15.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:15.148 06:03:34 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:15.149 06:03:34 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:15.149 06:03:34 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:15.149 06:03:34 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:15.149 06:03:34 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59887 00:05:15.149 06:03:34 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:15.149 06:03:34 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59887 00:05:15.149 06:03:34 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59887 ']' 00:05:15.149 06:03:34 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.149 06:03:34 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.149 06:03:34 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.149 06:03:34 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.149 06:03:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:15.149 06:03:34 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:15.410 [2024-11-20 06:03:34.833310] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:05:15.410 [2024-11-20 06:03:34.833434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:05:15.410 [2024-11-20 06:03:34.994044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.669 [2024-11-20 06:03:35.092997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.259 06:03:35 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.259 06:03:35 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:05:16.259 06:03:35 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:16.259 06:03:35 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:16.259 06:03:35 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:16.259 06:03:35 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:16.259 06:03:35 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:16.259 06:03:35 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:16.259 06:03:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.259 06:03:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.518 06:03:36 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:16.518 06:03:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:16.518 06:03:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:16.519 06:03:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1c0ac000-a10f-4b56-a3ac-9acea68e055b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1c0ac000-a10f-4b56-a3ac-9acea68e055b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "736dfa09-f3ca-436c-a9ed-01a30b2f9189"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "736dfa09-f3ca-436c-a9ed-01a30b2f9189",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a6d3c768-38c2-4459-8864-c601ce0fd1e0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6d3c768-38c2-4459-8864-c601ce0fd1e0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "74fcf32c-fd14-4e2f-88c2-0a5ba02d22a3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "74fcf32c-fd14-4e2f-88c2-0a5ba02d22a3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "76d30d74-7a63-4392-a0a7-d7adf5136759"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "76d30d74-7a63-4392-a0a7-d7adf5136759",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "960c8eeb-c9de-4535-a65c-384fe99c688e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "960c8eeb-c9de-4535-a65c-384fe99c688e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:16.776 06:03:36 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:16.776 06:03:36 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:16.776 06:03:36 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:16.776 06:03:36 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59887 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59887 ']' 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59887 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:05:16.776 06:03:36 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59887 00:05:16.776 killing process with pid 59887 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59887' 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59887 00:05:16.776 06:03:36 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59887 00:05:18.192 06:03:37 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:18.192 06:03:37 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:18.192 06:03:37 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:05:18.192 06:03:37 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.192 06:03:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:18.192 ************************************ 00:05:18.192 START TEST bdev_hello_world 00:05:18.192 ************************************ 00:05:18.192 06:03:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:18.192 [2024-11-20 06:03:37.759956] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:18.192 [2024-11-20 06:03:37.760074] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59971 ] 00:05:18.449 [2024-11-20 06:03:37.918511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.449 [2024-11-20 06:03:38.024160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.015 [2024-11-20 06:03:38.556317] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:19.015 [2024-11-20 06:03:38.556375] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:19.015 [2024-11-20 06:03:38.556397] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:19.015 [2024-11-20 06:03:38.558879] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:19.015 [2024-11-20 06:03:38.559250] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:19.015 [2024-11-20 06:03:38.559277] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:19.015 [2024-11-20 06:03:38.559753] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
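The write/read round trip traced above is SPDK's hello_bdev example app driven by the test script. For reference, the same step can be reproduced by hand; a minimal sketch, assuming the repo checkout and generated bdev.json used by this run:

  # Open Nvme0n1 from the generated config, write "Hello World!" through
  # an I/O channel, read it back, print the string, then stop the app.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

The -b flag selects which bdev from the JSON config to open; this run used Nvme0n1, the 1548666-block namespace behind 0000:00:10.0.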
00:05:19.015 00:05:19.015 [2024-11-20 06:03:38.559779] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:19.946 ************************************ 00:05:19.946 END TEST bdev_hello_world 00:05:19.946 ************************************ 00:05:19.946 00:05:19.946 real 0m1.579s 00:05:19.946 user 0m1.296s 00:05:19.946 sys 0m0.176s 00:05:19.946 06:03:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.946 06:03:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:19.946 06:03:39 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:19.946 06:03:39 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:19.946 06:03:39 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.946 06:03:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:19.946 ************************************ 00:05:19.946 START TEST bdev_bounds 00:05:19.946 ************************************ 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:05:19.946 Process bdevio pid: 60013 00:05:19.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60013 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60013' 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60013 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 60013 ']' 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.946 06:03:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:19.946 [2024-11-20 06:03:39.373276] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
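The bdev_bounds test starting here drives the bdevio harness in two steps: bdevio is launched in wait mode against the same bdev.json, and a helper script then triggers the suites over the RPC socket. A minimal sketch of the equivalent manual flow, assuming the default RPC socket path:

  # Step 1: start bdevio with the same flags as the run above;
  # -w makes it wait until perform_tests is invoked over RPC.
  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &

  # Step 2: kick off all registered suites once the app is listening.
  ./test/bdev/bdevio/tests.py perform_tests

One suite then runs per bdev (Nvme0n1 through Nvme3n1), each exercising the same battery of write/read, writev/readv, comparev, reset, and passthru cases shown below.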
00:05:19.947 [2024-11-20 06:03:39.373558] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60013 ] 00:05:19.947 [2024-11-20 06:03:39.532798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.204 [2024-11-20 06:03:39.635320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.204 [2024-11-20 06:03:39.635721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.204 [2024-11-20 06:03:39.635735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.767 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.767 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:05:20.767 06:03:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:20.767 I/O targets: 00:05:20.767 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:20.767 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:20.767 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:20.767 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:20.767 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:20.767 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:20.767 00:05:20.767 00:05:20.767 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.767 http://cunit.sourceforge.net/ 00:05:20.767 00:05:20.767 00:05:20.767 Suite: bdevio tests on: Nvme3n1 00:05:20.767 Test: blockdev write read block ...passed 00:05:20.767 Test: blockdev write zeroes read block ...passed 00:05:20.767 Test: blockdev write zeroes read no split ...passed 00:05:20.767 Test: blockdev write zeroes read split ...passed 00:05:20.767 Test: blockdev write zeroes read split partial ...passed 00:05:20.767 Test: blockdev reset ...[2024-11-20 06:03:40.349668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:20.767 [2024-11-20 06:03:40.352478] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:20.767 passed 00:05:20.767 Test: blockdev write read 8 blocks ...passed 00:05:20.767 Test: blockdev write read size > 128k ...passed 00:05:20.767 Test: blockdev write read invalid size ...passed 00:05:20.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.767 Test: blockdev write read max offset ...passed 00:05:20.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.767 Test: blockdev writev readv 8 blocks ...passed 00:05:20.767 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.767 Test: blockdev writev readv block ...passed 00:05:20.767 Test: blockdev writev readv size > 128k ...passed 00:05:20.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.767 Test: blockdev comparev and writev ...[2024-11-20 06:03:40.359468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1e0a000 len:0x1000 00:05:20.767 [2024-11-20 06:03:40.359634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:20.767 passed 00:05:20.767 Test: blockdev nvme passthru rw ...passed 00:05:20.767 Test: blockdev nvme passthru vendor specific ...passed 00:05:20.767 Test: blockdev nvme admin passthru ...[2024-11-20 06:03:40.360419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:20.767 [2024-11-20 06:03:40.360454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:20.767 passed 00:05:20.767 Test: blockdev copy ...passed 00:05:20.767 Suite: bdevio tests on: Nvme2n3 00:05:20.767 Test: blockdev write read block ...passed 00:05:20.767 Test: blockdev write zeroes read block ...passed 00:05:20.767 Test: blockdev write zeroes read no split ...passed 00:05:20.767 Test: blockdev write zeroes read split ...passed 00:05:21.025 Test: blockdev write zeroes read split partial ...passed 00:05:21.025 Test: blockdev reset ...[2024-11-20 06:03:40.419205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:21.025 [2024-11-20 06:03:40.422301] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:21.025 passed 00:05:21.026 Test: blockdev write read 8 blocks ...passed 00:05:21.026 Test: blockdev write read size > 128k ...passed 00:05:21.026 Test: blockdev write read invalid size ...passed 00:05:21.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.026 Test: blockdev write read max offset ...passed 00:05:21.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.026 Test: blockdev writev readv 8 blocks ...passed 00:05:21.026 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.026 Test: blockdev writev readv block ...passed 00:05:21.026 Test: blockdev writev readv size > 128k ...passed 00:05:21.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.026 Test: blockdev comparev and writev ...[2024-11-20 06:03:40.429036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6206000 len:0x1000 00:05:21.026 [2024-11-20 06:03:40.429082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev nvme passthru rw ...passed 00:05:21.026 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:03:40.429616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:05:21.026 Test: blockdev nvme admin passthru ... 00:05:21.026 [2024-11-20 06:03:40.429723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev copy ...passed 00:05:21.026 Suite: bdevio tests on: Nvme2n2 00:05:21.026 Test: blockdev write read block ...passed 00:05:21.026 Test: blockdev write zeroes read block ...passed 00:05:21.026 Test: blockdev write zeroes read no split ...passed 00:05:21.026 Test: blockdev write zeroes read split ...passed 00:05:21.026 Test: blockdev write zeroes read split partial ...passed 00:05:21.026 Test: blockdev reset ...[2024-11-20 06:03:40.489568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:21.026 passed 00:05:21.026 Test: blockdev write read 8 blocks ...[2024-11-20 06:03:40.492590] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:21.026 passed 00:05:21.026 Test: blockdev write read size > 128k ...passed 00:05:21.026 Test: blockdev write read invalid size ...passed 00:05:21.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.026 Test: blockdev write read max offset ...passed 00:05:21.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.026 Test: blockdev writev readv 8 blocks ...passed 00:05:21.026 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.026 Test: blockdev writev readv block ...passed 00:05:21.026 Test: blockdev writev readv size > 128k ...passed 00:05:21.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.026 Test: blockdev comparev and writev ...[2024-11-20 06:03:40.498663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7a3c000 len:0x1000 00:05:21.026 [2024-11-20 06:03:40.498706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev nvme passthru rw ...passed 00:05:21.026 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.026 Test: blockdev nvme admin passthru ...[2024-11-20 06:03:40.499222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:21.026 [2024-11-20 06:03:40.499249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev copy ...passed 00:05:21.026 Suite: bdevio tests on: Nvme2n1 00:05:21.026 Test: blockdev write read block ...passed 00:05:21.026 Test: blockdev write zeroes read block ...passed 00:05:21.026 Test: blockdev write zeroes read no split ...passed 00:05:21.026 Test: blockdev write zeroes read split ...passed 00:05:21.026 Test: blockdev write zeroes read split partial ...passed 00:05:21.026 Test: blockdev reset ...[2024-11-20 06:03:40.543781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:21.026 [2024-11-20 06:03:40.546739] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
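Note that Nvme2n1, Nvme2n2, and Nvme2n3 are three namespaces of one controller (serial 12342 at 0000:00:12.0), which is why all three reset tests disconnect and reconnect the same PCI address. One way to confirm that grouping on a live target is to filter bdev_get_bdevs output by PCI address; a minimal sketch reusing the rpc.py and jq tools already used in this run:

  # List every NVMe bdev backed by the controller at 0000:00:12.0.
  ./scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | select(.driver_specific.nvme[0].pci_address == "0000:00:12.0") | .name'

Against the configuration dumped earlier this prints Nvme2n1, Nvme2n2, and Nvme2n3 (ns_data ids 1, 2, and 3).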
00:05:21.026 00:05:21.026 Test: blockdev write read 8 blocks ...passed 00:05:21.026 Test: blockdev write read size > 128k ...passed 00:05:21.026 Test: blockdev write read invalid size ...passed 00:05:21.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.026 Test: blockdev write read max offset ...passed 00:05:21.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.026 Test: blockdev writev readv 8 blocks ...passed 00:05:21.026 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.026 Test: blockdev writev readv block ...passed 00:05:21.026 Test: blockdev writev readv size > 128k ...passed 00:05:21.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.026 Test: blockdev comparev and writev ...[2024-11-20 06:03:40.553211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7a38000 len:0x1000 00:05:21.026 [2024-11-20 06:03:40.553345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev nvme passthru rw ...passed 00:05:21.026 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:03:40.554047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:21.026 [2024-11-20 06:03:40.554141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev nvme admin passthru ...passed 00:05:21.026 Test: blockdev copy ...passed 00:05:21.026 Suite: bdevio tests on: Nvme1n1 00:05:21.026 Test: blockdev write read block ...passed 00:05:21.026 Test: blockdev write zeroes read block ...passed 00:05:21.026 Test: blockdev write zeroes read no split ...passed 00:05:21.026 Test: blockdev write zeroes read split ...passed 00:05:21.026 Test: blockdev write zeroes read split partial ...passed 00:05:21.026 Test: blockdev reset ...[2024-11-20 06:03:40.609161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:21.026 [2024-11-20 06:03:40.611858] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:21.026 passed 00:05:21.026 Test: blockdev write read 8 blocks ...passed 00:05:21.026 Test: blockdev write read size > 128k ...passed 00:05:21.026 Test: blockdev write read invalid size ...passed 00:05:21.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.026 Test: blockdev write read max offset ...passed 00:05:21.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.026 Test: blockdev writev readv 8 blocks ...passed 00:05:21.026 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.026 Test: blockdev writev readv block ...passed 00:05:21.026 Test: blockdev writev readv size > 128k ...passed 00:05:21.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.026 Test: blockdev comparev and writev ...[2024-11-20 06:03:40.618621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7a34000 len:0x1000 00:05:21.026 [2024-11-20 06:03:40.618739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev nvme passthru rw ...passed 00:05:21.026 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:03:40.619425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:21.026 [2024-11-20 06:03:40.619524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:21.026 passed 00:05:21.026 Test: blockdev nvme admin passthru ...passed 00:05:21.026 Test: blockdev copy ...passed 00:05:21.026 Suite: bdevio tests on: Nvme0n1 00:05:21.026 Test: blockdev write read block ...passed 00:05:21.026 Test: blockdev write zeroes read block ...passed 00:05:21.026 Test: blockdev write zeroes read no split ...passed 00:05:21.284 Test: blockdev write zeroes read split ...passed 00:05:21.284 Test: blockdev write zeroes read split partial ...passed 00:05:21.284 Test: blockdev reset ...[2024-11-20 06:03:40.677508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:21.284 [2024-11-20 06:03:40.680207] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:21.284 passed 00:05:21.284 Test: blockdev write read 8 blocks ...passed 00:05:21.284 Test: blockdev write read size > 128k ...passed 00:05:21.284 Test: blockdev write read invalid size ...passed 00:05:21.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.284 Test: blockdev write read max offset ...passed 00:05:21.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.284 Test: blockdev writev readv 8 blocks ...passed 00:05:21.284 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.284 Test: blockdev writev readv block ...passed 00:05:21.284 Test: blockdev writev readv size > 128k ...passed 00:05:21.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.284 Test: blockdev comparev and writev ...passed [2024-11-20 06:03:40.686413] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 
00:05:21.284 00:05:21.284 Test: blockdev nvme passthru rw ...passed 00:05:21.284 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:03:40.687062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:21.284 [2024-11-20 06:03:40.687135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:21.284 passed 00:05:21.284 Test: blockdev nvme admin passthru ...passed 00:05:21.284 Test: blockdev copy ...passed 00:05:21.284 00:05:21.284 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.284 suites 6 6 n/a 0 0 00:05:21.284 tests 138 138 138 0 0 00:05:21.284 asserts 893 893 893 0 n/a 00:05:21.284 00:05:21.284 Elapsed time = 1.014 seconds 00:05:21.284 0 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60013 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 60013 ']' 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 60013 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60013 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60013' 00:05:21.284 killing process with pid 60013 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 60013 00:05:21.284 06:03:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 60013 00:05:21.849 06:03:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:21.849 00:05:21.849 real 0m2.083s 00:05:21.849 user 0m5.332s 00:05:21.849 sys 0m0.260s 00:05:21.849 ************************************ 00:05:21.849 END TEST bdev_bounds 00:05:21.849 ************************************ 00:05:21.849 06:03:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.849 06:03:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:21.849 06:03:41 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:21.849 06:03:41 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:21.849 06:03:41 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.849 06:03:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:21.849 ************************************ 00:05:21.849 START TEST bdev_nbd 00:05:21.849 ************************************ 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:21.849 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:21.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60067 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60067 /var/tmp/spdk-nbd.sock 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 60067 ']' 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:21.850 06:03:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:22.107 [2024-11-20 06:03:41.500925] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
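The bdev_nbd test starting here exports each bdev as a kernel /dev/nbdX node and probes it with dd. A minimal sketch of the underlying flow, assuming the same bdev_svc app and socket as this run and that the kernel nbd module is loaded (the harness checks /sys/module/nbd):

  # Start the minimal bdev application on a dedicated RPC socket.
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json test/bdev/bdev.json &

  # Export a bdev through the kernel NBD driver, verify one O_DIRECT
  # 4 KiB read against it, then tear the device down again.
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

nbd_get_disks on the same socket returns the nbd_device/bdev_name pairs shown later in this trace, and the single-digit MB/s figures from the dd probes reflect one-request latency rather than device throughput.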
00:05:22.107 [2024-11-20 06:03:41.501041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:22.107 [2024-11-20 06:03:41.662344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.365 [2024-11-20 06:03:41.764309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.930 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.930 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:22.931 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:23.188 1+0 records in 
00:05:23.188 1+0 records out 00:05:23.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036965 s, 11.1 MB/s 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:23.188 1+0 records in 00:05:23.188 1+0 records out 00:05:23.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037453 s, 10.9 MB/s 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:23.188 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:23.444 06:03:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:23.444 1+0 records in 00:05:23.444 1+0 records out 00:05:23.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002877 s, 14.2 MB/s 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:23.444 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:23.701 1+0 records in 00:05:23.701 1+0 records out 00:05:23.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356313 s, 11.5 MB/s 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.701 06:03:43 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:23.701 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:23.959 1+0 records in 00:05:23.959 1+0 records out 00:05:23.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047432 s, 8.6 MB/s 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:23.959 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:24.216 1+0 records in 00:05:24.216 1+0 records out 00:05:24.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452762 s, 9.0 MB/s 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:24.216 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.473 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd0", 00:05:24.473 "bdev_name": "Nvme0n1" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd1", 00:05:24.473 "bdev_name": "Nvme1n1" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd2", 00:05:24.473 "bdev_name": "Nvme2n1" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd3", 00:05:24.473 "bdev_name": "Nvme2n2" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd4", 00:05:24.473 "bdev_name": "Nvme2n3" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd5", 00:05:24.473 "bdev_name": "Nvme3n1" 00:05:24.473 } 00:05:24.473 ]' 00:05:24.473 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:24.473 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd0", 00:05:24.473 "bdev_name": "Nvme0n1" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd1", 00:05:24.473 "bdev_name": "Nvme1n1" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd2", 00:05:24.473 "bdev_name": "Nvme2n1" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd3", 00:05:24.473 "bdev_name": "Nvme2n2" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd4", 00:05:24.473 "bdev_name": "Nvme2n3" 00:05:24.473 }, 00:05:24.473 { 00:05:24.473 "nbd_device": "/dev/nbd5", 00:05:24.473 "bdev_name": "Nvme3n1" 00:05:24.473 } 00:05:24.473 ]' 00:05:24.473 06:03:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:24.473 06:03:44 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:24.473 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.473 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:24.473 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.473 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:24.473 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.473 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.729 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.730 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.013 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.270 06:03:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.528 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.785 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.042 06:03:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:26.042 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:26.043 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:26.299 /dev/nbd0 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:26.299 
06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.299 1+0 records in 00:05:26.299 1+0 records out 00:05:26.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042122 s, 9.7 MB/s 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:26.299 06:03:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:26.556 /dev/nbd1 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.556 1+0 records in 00:05:26.556 1+0 records out 00:05:26.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005116 s, 8.0 MB/s 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 
-- # return 0 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:26.556 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:26.814 /dev/nbd10 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.814 1+0 records in 00:05:26.814 1+0 records out 00:05:26.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406705 s, 10.1 MB/s 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:26.814 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:27.074 /dev/nbd11 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:27.074 1+0 records in 00:05:27.074 1+0 records out 00:05:27.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399433 s, 10.3 MB/s 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:27.074 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:27.335 /dev/nbd12 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.335 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:27.336 1+0 records in 00:05:27.336 1+0 records out 00:05:27.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133964 s, 3.1 MB/s 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:27.336 06:03:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:27.596 /dev/nbd13 00:05:27.596 06:03:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:27.596 1+0 records in 00:05:27.596 1+0 records out 00:05:27.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445573 s, 9.2 MB/s 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.596 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd0", 00:05:27.856 "bdev_name": "Nvme0n1" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd1", 00:05:27.856 "bdev_name": "Nvme1n1" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd10", 00:05:27.856 "bdev_name": "Nvme2n1" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd11", 00:05:27.856 "bdev_name": "Nvme2n2" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd12", 00:05:27.856 "bdev_name": "Nvme2n3" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd13", 00:05:27.856 "bdev_name": "Nvme3n1" 00:05:27.856 } 00:05:27.856 ]' 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd0", 00:05:27.856 "bdev_name": "Nvme0n1" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd1", 00:05:27.856 "bdev_name": "Nvme1n1" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd10", 00:05:27.856 "bdev_name": "Nvme2n1" 00:05:27.856 }, 00:05:27.856 
{ 00:05:27.856 "nbd_device": "/dev/nbd11", 00:05:27.856 "bdev_name": "Nvme2n2" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd12", 00:05:27.856 "bdev_name": "Nvme2n3" 00:05:27.856 }, 00:05:27.856 { 00:05:27.856 "nbd_device": "/dev/nbd13", 00:05:27.856 "bdev_name": "Nvme3n1" 00:05:27.856 } 00:05:27.856 ]' 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.856 /dev/nbd1 00:05:27.856 /dev/nbd10 00:05:27.856 /dev/nbd11 00:05:27.856 /dev/nbd12 00:05:27.856 /dev/nbd13' 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.856 /dev/nbd1 00:05:27.856 /dev/nbd10 00:05:27.856 /dev/nbd11 00:05:27.856 /dev/nbd12 00:05:27.856 /dev/nbd13' 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:27.856 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.857 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.857 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:27.857 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.857 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:27.857 256+0 records in 00:05:27.857 256+0 records out 00:05:27.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00915301 s, 115 MB/s 00:05:27.857 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.857 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.120 256+0 records in 00:05:28.120 256+0 records out 00:05:28.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179171 s, 5.9 MB/s 00:05:28.120 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.120 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.120 256+0 records in 00:05:28.120 256+0 records out 00:05:28.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12938 s, 8.1 MB/s 00:05:28.120 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.120 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:28.382 256+0 records in 00:05:28.382 256+0 records out 00:05:28.382 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.225433 s, 4.7 MB/s 00:05:28.382 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.382 06:03:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:28.642 256+0 records in 00:05:28.642 256+0 records out 00:05:28.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200125 s, 5.2 MB/s 00:05:28.642 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.642 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:28.642 256+0 records in 00:05:28.642 256+0 records out 00:05:28.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152045 s, 6.9 MB/s 00:05:28.642 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.642 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:28.904 256+0 records in 00:05:28.904 256+0 records out 00:05:28.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192819 s, 5.4 MB/s 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.904 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.164 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.425 06:03:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.687 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.946 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.203 06:03:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.460 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.461 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.461 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.461 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.461 06:03:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:30.461 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.461 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:30.461 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:30.718 malloc_lvol_verify 00:05:30.718 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:30.976 30277d3d-2421-4647-b595-ac9f25b52d11 00:05:30.976 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:31.234 3ae892d6-bea3-43e1-b68d-8770a1e3f321 00:05:31.234 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:31.492 /dev/nbd0 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:31.492 mke2fs 1.47.0 (5-Feb-2023) 00:05:31.492 Discarding device blocks: 0/4096 done 00:05:31.492 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:31.492 00:05:31.492 Allocating group tables: 0/1 done 00:05:31.492 Writing inode tables: 0/1 done 00:05:31.492 Creating journal (1024 blocks): done 00:05:31.492 Writing superblocks and filesystem accounting information: 0/1 done 00:05:31.492 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:31.492 06:03:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.492 06:03:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60067 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 60067 ']' 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 60067 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60067 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.751 killing process with pid 60067 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60067' 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 60067 00:05:31.751 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 60067 00:05:32.317 06:03:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:32.317 00:05:32.317 real 0m10.375s 00:05:32.317 user 0m14.486s 00:05:32.317 sys 0m3.350s 00:05:32.318 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.318 ************************************ 00:05:32.318 END TEST bdev_nbd 00:05:32.318 06:03:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:32.318 ************************************ 00:05:32.318 06:03:51 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:32.318 06:03:51 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:32.318 skipping fio tests on NVMe due to multi-ns failures. 00:05:32.318 06:03:51 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
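Every nbd teardown in the trace above follows the same two-step pattern: an nbd_stop_disk RPC against the /var/tmp/spdk-nbd.sock server, then a bounded poll of /proc/partitions until the kernel drops the device node. A condensed bash sketch of that pattern, with the helper name taken from the nbd_common.sh trace (the 20-iteration grep loop is as traced; the sleep interval and the failure return are illustrative assumptions):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # the device is gone once it disappears from the kernel's partition list
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1    # assumed back-off; the trace only shows the grep loop
    done
    return 1    # assumed failure path after 20 polls
}

for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    waitfornbd_exit "$(basename "$dev")"
done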
00:05:32.318 06:03:51 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:32.318 06:03:51 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:32.318 06:03:51 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:05:32.318 06:03:51 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.318 06:03:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:32.318 ************************************ 00:05:32.318 START TEST bdev_verify 00:05:32.318 ************************************ 00:05:32.318 06:03:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:32.318 [2024-11-20 06:03:51.905609] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:32.318 [2024-11-20 06:03:51.905703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60445 ] 00:05:32.576 [2024-11-20 06:03:52.051391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.576 [2024-11-20 06:03:52.140712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.576 [2024-11-20 06:03:52.140866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.141 Running I/O for 5 seconds... 00:05:35.448 25600.00 IOPS, 100.00 MiB/s [2024-11-20T06:03:56.065Z] 26112.00 IOPS, 102.00 MiB/s [2024-11-20T06:03:57.000Z] 25856.00 IOPS, 101.00 MiB/s [2024-11-20T06:03:57.933Z] 25328.00 IOPS, 98.94 MiB/s 00:05:38.300 Latency(us) 00:05:38.300 [2024-11-20T06:03:57.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:38.300 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:38.300 Verification LBA range: start 0x0 length 0xbd0bd 00:05:38.300 Nvme0n1 : 5.05 2000.80 7.82 0.00 0.00 63832.44 11090.71 65334.35 00:05:38.300 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:38.300 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:38.300 Nvme0n1 : 5.07 2021.70 7.90 0.00 0.00 63173.06 8670.92 65737.65 00:05:38.300 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:38.300 Verification LBA range: start 0x0 length 0xa0000 00:05:38.300 Nvme1n1 : 5.06 2000.22 7.81 0.00 0.00 63768.36 11292.36 58881.58 00:05:38.300 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:38.300 Verification LBA range: start 0xa0000 length 0xa0000 00:05:38.300 Nvme1n1 : 5.07 2021.15 7.90 0.00 0.00 63024.28 9124.63 60898.07 00:05:38.301 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:38.301 Verification LBA range: start 0x0 length 0x80000 00:05:38.301 Nvme2n1 : 5.06 1999.65 7.81 0.00 0.00 63671.12 11645.24 58478.28 00:05:38.301 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:38.301 Verification LBA range: start 0x80000 length 0x80000 00:05:38.301 Nvme2n1 : 5.07 2020.62 7.89 0.00 0.00 62928.08 9427.10 60494.77 00:05:38.301 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 
4096) 00:05:38.301 Verification LBA range: start 0x0 length 0x80000 00:05:38.301 Nvme2n2 : 5.06 1999.07 7.81 0.00 0.00 63568.21 11695.66 61301.37 00:05:38.301 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:38.301 Verification LBA range: start 0x80000 length 0x80000 00:05:38.301 Nvme2n2 : 5.07 2020.07 7.89 0.00 0.00 62822.09 9729.58 59284.87 00:05:38.301 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:38.301 Verification LBA range: start 0x0 length 0x80000 00:05:38.301 Nvme2n3 : 5.06 1998.50 7.81 0.00 0.00 63468.23 11998.13 64527.75 00:05:38.301 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:38.301 Verification LBA range: start 0x80000 length 0x80000 00:05:38.301 Nvme2n3 : 5.07 2019.52 7.89 0.00 0.00 62718.14 9779.99 58074.98 00:05:38.301 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:38.301 Verification LBA range: start 0x0 length 0x20000 00:05:38.301 Nvme3n1 : 5.06 1997.90 7.80 0.00 0.00 63358.01 6906.49 67754.14 00:05:38.301 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:38.301 Verification LBA range: start 0x20000 length 0x20000 00:05:38.301 Nvme3n1 : 5.07 2018.95 7.89 0.00 0.00 62634.54 7713.08 59284.87 00:05:38.301 [2024-11-20T06:03:57.934Z] =================================================================================================================== 00:05:38.301 [2024-11-20T06:03:57.934Z] Total : 24118.15 94.21 0.00 0.00 63244.92 6906.49 67754.14 00:05:39.718 00:05:39.718 real 0m7.092s 00:05:39.718 user 0m13.345s 00:05:39.718 sys 0m0.189s 00:05:39.718 06:03:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.718 06:03:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:39.718 ************************************ 00:05:39.718 END TEST bdev_verify 00:05:39.718 ************************************ 00:05:39.718 06:03:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:39.718 06:03:58 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:05:39.718 06:03:58 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.718 06:03:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:39.718 ************************************ 00:05:39.718 START TEST bdev_verify_big_io 00:05:39.718 ************************************ 00:05:39.718 06:03:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:39.718 [2024-11-20 06:03:59.051414] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
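The verify stage that just completed and the big-io stage starting here drive the same bdevperf example binary; only the I/O size and run differ between them. Reassembled from the run_test lines in the trace, the invocation looks like this (bash; comments annotate the flags as used here, and -C is passed through as traced without interpreting it):

args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev config written earlier in the run
    -q 128       # queue depth
    -o 4096      # I/O size in bytes (65536 in the big-io pass below)
    -w verify    # write, read back, and compare
    -t 5         # run time in seconds
    -C           # as traced
    -m 0x3       # core mask 0x3: the two reactors on cores 0 and 1 seen in the log
)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"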
00:05:39.718 [2024-11-20 06:03:59.051549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60543 ] 00:05:39.718 [2024-11-20 06:03:59.210814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.718 [2024-11-20 06:03:59.309861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.718 [2024-11-20 06:03:59.309973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.652 Running I/O for 5 seconds... 00:05:45.303 2058.00 IOPS, 128.62 MiB/s [2024-11-20T06:04:05.875Z] 2656.00 IOPS, 166.00 MiB/s [2024-11-20T06:04:06.134Z] 3031.00 IOPS, 189.44 MiB/s 00:05:46.501 Latency(us) 00:05:46.501 [2024-11-20T06:04:06.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:46.501 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:46.501 Verification LBA range: start 0x0 length 0xbd0b 00:05:46.501 Nvme0n1 : 5.64 148.95 9.31 0.00 0.00 834340.98 19660.80 909841.33 00:05:46.501 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:46.501 Verification LBA range: start 0xbd0b length 0xbd0b 00:05:46.501 Nvme0n1 : 5.74 115.27 7.20 0.00 0.00 1058356.33 12250.19 1129235.69 00:05:46.501 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:46.501 Verification LBA range: start 0x0 length 0xa000 00:05:46.501 Nvme1n1 : 5.75 157.03 9.81 0.00 0.00 776952.03 65737.65 793691.37 00:05:46.501 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:46.501 Verification LBA range: start 0xa000 length 0xa000 00:05:46.501 Nvme1n1 : 5.77 122.01 7.63 0.00 0.00 985327.32 22282.24 929199.66 00:05:46.501 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:46.501 Verification LBA range: start 0x0 length 0x8000 00:05:46.501 Nvme2n1 : 5.71 156.90 9.81 0.00 0.00 760060.51 65737.65 813049.70 00:05:46.501 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:46.501 Verification LBA range: start 0x8000 length 0x8000 00:05:46.502 Nvme2n1 : 5.77 121.93 7.62 0.00 0.00 948612.76 24903.68 903388.55 00:05:46.502 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:46.502 Verification LBA range: start 0x0 length 0x8000 00:05:46.502 Nvme2n2 : 5.71 156.85 9.80 0.00 0.00 740703.31 66140.95 732390.01 00:05:46.502 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:46.502 Verification LBA range: start 0x8000 length 0x8000 00:05:46.502 Nvme2n2 : 5.83 129.10 8.07 0.00 0.00 865433.32 22584.71 929199.66 00:05:46.502 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:46.502 Verification LBA range: start 0x0 length 0x8000 00:05:46.502 Nvme2n3 : 5.75 159.89 9.99 0.00 0.00 709808.91 38716.65 864671.90 00:05:46.502 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:46.502 Verification LBA range: start 0x8000 length 0x8000 00:05:46.502 Nvme2n3 : 5.90 146.32 9.14 0.00 0.00 745346.20 17039.36 1142141.24 00:05:46.502 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:46.502 Verification LBA range: start 0x0 length 0x2000 00:05:46.502 Nvme3n1 : 5.77 173.36 10.83 0.00 0.00 642650.73 4058.19 764653.88 00:05:46.502 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:05:46.502 Verification LBA range: start 0x2000 length 0x2000 00:05:46.502 Nvme3n1 : 5.96 175.43 10.96 0.00 0.00 603041.44 393.85 2013265.92 00:05:46.502 [2024-11-20T06:04:06.135Z] =================================================================================================================== 00:05:46.502 [2024-11-20T06:04:06.135Z] Total : 1763.03 110.19 0.00 0.00 788307.20 393.85 2013265.92 00:05:47.878 00:05:47.878 real 0m8.444s 00:05:47.878 user 0m15.965s 00:05:47.878 sys 0m0.230s 00:05:47.878 06:04:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.878 06:04:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:05:47.878 ************************************ 00:05:47.878 END TEST bdev_verify_big_io 00:05:47.878 ************************************ 00:05:47.878 06:04:07 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:47.878 06:04:07 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:05:47.878 06:04:07 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.878 06:04:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:47.878 ************************************ 00:05:47.878 START TEST bdev_write_zeroes 00:05:47.878 ************************************ 00:05:47.878 06:04:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:48.135 [2024-11-20 06:04:07.537302] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:48.135 [2024-11-20 06:04:07.537434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60657 ] 00:05:48.135 [2024-11-20 06:04:07.697847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.393 [2024-11-20 06:04:07.798871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.958 Running I/O for 1 seconds... 
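The write_zeroes stage above reuses the same harness with only the workload knobs changed: per its run_test line, -w write_zeroes issues zero-fill commands instead of the verify read-back, -t drops to one second, and the EAL parameters show a single core (-c 0x1). The equivalent direct invocation, with the one-second results following:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1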
00:05:49.887 72576.00 IOPS, 283.50 MiB/s 00:05:49.887 Latency(us) 00:05:49.887 [2024-11-20T06:04:09.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:49.887 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:49.887 Nvme0n1 : 1.02 12022.47 46.96 0.00 0.00 10625.25 8973.39 20265.75 00:05:49.887 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:49.887 Nvme1n1 : 1.02 12008.38 46.91 0.00 0.00 10624.51 8922.98 19862.45 00:05:49.887 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:49.887 Nvme2n1 : 1.02 11994.69 46.85 0.00 0.00 10591.26 8922.98 19257.50 00:05:49.887 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:49.887 Nvme2n2 : 1.03 11981.13 46.80 0.00 0.00 10565.04 7864.32 18652.55 00:05:49.887 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:49.887 Nvme2n3 : 1.03 11967.52 46.75 0.00 0.00 10546.52 5973.86 18652.55 00:05:49.887 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:49.887 Nvme3n1 : 1.03 11954.01 46.70 0.00 0.00 10541.38 5847.83 20064.10 00:05:49.887 [2024-11-20T06:04:09.520Z] =================================================================================================================== 00:05:49.887 [2024-11-20T06:04:09.521Z] Total : 71928.20 280.97 0.00 0.00 10582.33 5847.83 20265.75 00:05:50.818 00:05:50.818 real 0m2.661s 00:05:50.818 user 0m2.358s 00:05:50.818 sys 0m0.188s 00:05:50.818 06:04:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.818 ************************************ 00:05:50.818 END TEST bdev_write_zeroes 00:05:50.818 ************************************ 00:05:50.818 06:04:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:05:50.818 06:04:10 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:50.818 06:04:10 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:05:50.818 06:04:10 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.818 06:04:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.818 ************************************ 00:05:50.818 START TEST bdev_json_nonenclosed 00:05:50.818 ************************************ 00:05:50.819 06:04:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:50.819 [2024-11-20 06:04:10.235060] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
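bdev_json_nonenclosed is a negative test: bdevperf is pointed at a config whose top level is not a JSON object, and the "not enclosed in {}" error below is the expected outcome. A hypothetical illustration of the difference (the actual nonenclosed.json contents never appear in this log):

    "subsystems": []          <- bare key/value at the top level: rejected
    { "subsystems": [] }      <- enclosed in a JSON object: the accepted shape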
00:05:50.819 [2024-11-20 06:04:10.235180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60710 ] 00:05:50.819 [2024-11-20 06:04:10.395758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.076 [2024-11-20 06:04:10.496592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.076 [2024-11-20 06:04:10.496675] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:05:51.076 [2024-11-20 06:04:10.496692] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:51.076 [2024-11-20 06:04:10.496701] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.076 00:05:51.076 real 0m0.509s 00:05:51.076 user 0m0.310s 00:05:51.076 sys 0m0.095s 00:05:51.076 06:04:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.077 06:04:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:05:51.077 ************************************ 00:05:51.077 END TEST bdev_json_nonenclosed 00:05:51.077 ************************************ 00:05:51.334 06:04:10 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:51.334 06:04:10 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:05:51.334 06:04:10 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.334 06:04:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:51.334 ************************************ 00:05:51.334 START TEST bdev_json_nonarray 00:05:51.334 ************************************ 00:05:51.334 06:04:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:51.334 [2024-11-20 06:04:10.781362] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:51.334 [2024-11-20 06:04:10.781486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60730 ] 00:05:51.334 [2024-11-20 06:04:10.939855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.592 [2024-11-20 06:04:11.040802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.592 [2024-11-20 06:04:11.040895] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
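bdev_json_nonarray, whose "'subsystems' should be an array" error appears above, probes the second half of the same validation: even inside a top-level object, the subsystems key must be an array. The accepted skeleton is the standard SPDK JSON config shape (subsystem entries elided; shown for orientation, not taken from this log):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}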
00:05:51.592 [2024-11-20 06:04:11.040913] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:05:51.592 [2024-11-20 06:04:11.040922] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:51.592
00:05:51.592 real 0m0.499s
00:05:51.592 user 0m0.312s
00:05:51.592 sys 0m0.083s
00:05:51.592 06:04:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:51.592 06:04:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:05:51.592 ************************************
00:05:51.592 END TEST bdev_json_nonarray
00:05:51.592 ************************************
00:05:51.850 06:04:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:05:51.850 06:04:11 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:05:51.850 06:04:11 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:05:51.850 06:04:11 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:05:51.850 06:04:11 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:05:51.850 06:04:11 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:05:51.851 06:04:11 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:05:51.851 06:04:11 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:05:51.851 06:04:11 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:05:51.851 06:04:11 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:05:51.851 06:04:11 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:05:51.851
00:05:51.851 real 0m36.644s
00:05:51.851 user 0m56.608s
00:05:51.851 sys 0m5.280s
00:05:51.851 06:04:11 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:51.851 06:04:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:51.851 ************************************
00:05:51.851 END TEST blockdev_nvme
00:05:51.851 ************************************
00:05:51.851 06:04:11 -- spdk/autotest.sh@209 -- # uname -s
00:05:51.851 06:04:11 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:05:51.851 06:04:11 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:05:51.851 06:04:11 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:05:51.851 06:04:11 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:51.851 06:04:11 -- common/autotest_common.sh@10 -- # set +x
00:05:51.851 ************************************
00:05:51.851 START TEST blockdev_nvme_gpt
00:05:51.851 ************************************
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:05:51.851 * Looking for test storage...
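Every START TEST/END TEST banner pair in this log comes from the autotest run_test helper, which wraps a command with banners, a timing report (the real/user/sys lines above), and an exit-status check. A simplified sketch of that pattern (the actual helper in autotest_common.sh also manages xtrace nesting and argument validation):

  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@" || return 1   # a failing test aborts the run under set -e
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }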
00:05:51.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:51.851 06:04:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:51.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.851 --rc genhtml_branch_coverage=1
00:05:51.851 --rc genhtml_function_coverage=1
00:05:51.851 --rc genhtml_legend=1
00:05:51.851 --rc geninfo_all_blocks=1
00:05:51.851 --rc geninfo_unexecuted_blocks=1
00:05:51.851
00:05:51.851 '
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:51.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.851 --rc genhtml_branch_coverage=1
00:05:51.851 --rc genhtml_function_coverage=1
00:05:51.851 --rc genhtml_legend=1
00:05:51.851 --rc geninfo_all_blocks=1
00:05:51.851 --rc geninfo_unexecuted_blocks=1
00:05:51.851
00:05:51.851 '
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:51.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.851 --rc genhtml_branch_coverage=1
00:05:51.851 --rc genhtml_function_coverage=1
00:05:51.851 --rc genhtml_legend=1
00:05:51.851 --rc geninfo_all_blocks=1
00:05:51.851 --rc geninfo_unexecuted_blocks=1
00:05:51.851
00:05:51.851 '
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:51.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.851 --rc genhtml_branch_coverage=1
00:05:51.851 --rc genhtml_function_coverage=1
00:05:51.851 --rc genhtml_legend=1
00:05:51.851 --rc geninfo_all_blocks=1
00:05:51.851 --rc geninfo_unexecuted_blocks=1
00:05:51.851
00:05:51.851 '
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device=
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek=
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx=
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]]
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]]
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60814
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60814
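The lt 1.15 2 trace above is how autotest gates the lcov options on the installed lcov version: split both version strings on their separators, then compare field by field as integers. A condensed sketch of the idiom (the real cmp_versions in scripts/common.sh also splits on '-' and ':' and implements >, >=, == and friends):

  lt() {  # usage: lt 1.15 2  -> success when $1 is an older version than $2
    local -a a b
    local i
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # versions are equal
  }
  lt 1.15 2 && echo 'lcov 1.x detected: use the --rc lcov_* option spelling'

In the run above this check returned true, which is why lcov_rc_opt was set to the lcov 1.x '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling.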
00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60814 ']' 00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.851 06:04:11 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:51.851 06:04:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:05:52.109 [2024-11-20 06:04:11.514282] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:05:52.109 [2024-11-20 06:04:11.514408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60814 ] 00:05:52.109 [2024-11-20 06:04:11.671150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.366 [2024-11-20 06:04:11.770062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.930 06:04:12 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.930 06:04:12 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:05:52.930 06:04:12 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:52.930 06:04:12 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:05:52.930 06:04:12 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:53.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.225 Waiting for block devices as requested 00:05:53.225 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:53.505 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:53.505 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:53.763 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.025 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:59.025 06:04:18 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:05:59.025 06:04:18 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:05:59.025 BYT; 00:05:59.025 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:05:59.025 BYT; 00:05:59.025 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:05:59.025 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:05:59.026 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:05:59.026 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:05:59.026 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:05:59.026 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:05:59.026 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:05:59.026 06:04:18 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:05:59.026 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:05:59.026 06:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:05:59.961 The operation has completed successfully. 00:05:59.961 06:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:00.894 The operation has completed successfully. 00:06:00.894 06:04:20 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.787 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.787 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.788 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.788 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.788 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:01.788 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 [] 00:06:01.788 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:01.788 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:01.788 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:01.788 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:01.788 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:01.788 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:02.045 06:04:21 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:02.045 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.045 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:02.304 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.304 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:02.304 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:02.305 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d87b4110-0a31-4aaa-999a-86024a5c9788"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d87b4110-0a31-4aaa-999a-86024a5c9788",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "82e14fe7-a79d-4cf1-aa14-43a50fa0da7e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "82e14fe7-a79d-4cf1-aa14-43a50fa0da7e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c0af9116-144c-454d-9d69-614c51e95df5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c0af9116-144c-454d-9d69-614c51e95df5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cd85713d-8d8f-498d-979e-08117449f3ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd85713d-8d8f-498d-979e-08117449f3ae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "809a87d5-47d8-4062-885e-277216658230"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "809a87d5-47d8-4062-885e-277216658230",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:02.305 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:02.305 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:02.305 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:02.305 06:04:21 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60814 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60814 ']' 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60814 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60814 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.305 killing process with pid 60814 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60814' 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60814 00:06:02.305 06:04:21 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60814 00:06:03.678 06:04:23 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:03.678 06:04:23 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:03.678 06:04:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:03.678 06:04:23 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.678 06:04:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:03.678 ************************************ 00:06:03.678 START TEST bdev_hello_world 00:06:03.678 ************************************ 00:06:03.679 06:04:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:03.936 
[2024-11-20 06:04:23.340097] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:03.936 [2024-11-20 06:04:23.340238] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61433 ] 00:06:03.936 [2024-11-20 06:04:23.507158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.194 [2024-11-20 06:04:23.607865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.759 [2024-11-20 06:04:24.152745] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:04.759 [2024-11-20 06:04:24.152791] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:04.759 [2024-11-20 06:04:24.152818] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:04.759 [2024-11-20 06:04:24.155344] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:04.759 [2024-11-20 06:04:24.156022] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:04.759 [2024-11-20 06:04:24.156052] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:04.759 [2024-11-20 06:04:24.156206] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:04.759 00:06:04.759 [2024-11-20 06:04:24.156225] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:05.324 00:06:05.324 real 0m1.585s 00:06:05.324 user 0m1.298s 00:06:05.324 sys 0m0.181s 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:05.324 ************************************ 00:06:05.324 END TEST bdev_hello_world 00:06:05.324 ************************************ 00:06:05.324 06:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:05.324 06:04:24 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:05.324 06:04:24 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.324 06:04:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.324 ************************************ 00:06:05.324 START TEST bdev_bounds 00:06:05.324 ************************************ 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61469 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.324 Process bdevio pid: 61469 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61469' 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61469 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61469 ']' 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.324 06:04:24 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:05.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:05.324 06:04:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:05.582 [2024-11-20 06:04:24.963183] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:05.582 [2024-11-20 06:04:24.963298] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61469 ] 00:06:05.582 [2024-11-20 06:04:25.118917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.840 [2024-11-20 06:04:25.222513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.840 [2024-11-20 06:04:25.222599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.840 [2024-11-20 06:04:25.222889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.405 06:04:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:06.405 06:04:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:06.405 06:04:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:06.405 I/O targets: 00:06:06.405 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:06.405 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:06.405 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:06.405 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.405 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.405 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.405 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:06.405 00:06:06.405 00:06:06.405 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.405 http://cunit.sourceforge.net/ 00:06:06.405 00:06:06.405 00:06:06.405 Suite: bdevio tests on: Nvme3n1 00:06:06.405 Test: blockdev write read block ...passed 00:06:06.405 Test: blockdev write zeroes read block ...passed 00:06:06.405 Test: blockdev write zeroes read no split ...passed 00:06:06.405 Test: blockdev write zeroes read split ...passed 00:06:06.405 Test: blockdev write zeroes read split partial ...passed 00:06:06.405 Test: blockdev reset ...[2024-11-20 06:04:25.944633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:06.405 [2024-11-20 06:04:25.947443] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:06.405 passed 00:06:06.405 Test: blockdev write read 8 blocks ...passed 00:06:06.405 Test: blockdev write read size > 128k ...passed 00:06:06.405 Test: blockdev write read invalid size ...passed 00:06:06.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.405 Test: blockdev write read max offset ...passed 00:06:06.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.405 Test: blockdev writev readv 8 blocks ...passed 00:06:06.405 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.405 Test: blockdev writev readv block ...passed 00:06:06.405 Test: blockdev writev readv size > 128k ...passed 00:06:06.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.405 Test: blockdev comparev and writev ...[2024-11-20 06:04:25.953294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bec04000 len:0x1000 00:06:06.405 [2024-11-20 06:04:25.953342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.405 passed 00:06:06.405 Test: blockdev nvme passthru rw ...passed 00:06:06.405 Test: blockdev nvme passthru vendor specific ...passed 00:06:06.405 Test: blockdev nvme admin passthru ...[2024-11-20 06:04:25.953960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.405 [2024-11-20 06:04:25.953990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.405 passed 00:06:06.405 Test: blockdev copy ...passed 00:06:06.405 Suite: bdevio tests on: Nvme2n3 00:06:06.405 Test: blockdev write read block ...passed 00:06:06.405 Test: blockdev write zeroes read block ...passed 00:06:06.405 Test: blockdev write zeroes read no split ...passed 00:06:06.405 Test: blockdev write zeroes read split ...passed 00:06:06.405 Test: blockdev write zeroes read split partial ...passed 00:06:06.405 Test: blockdev reset ...[2024-11-20 06:04:26.005944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:06.405 [2024-11-20 06:04:26.008949] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:06.405 passed 00:06:06.405 Test: blockdev write read 8 blocks ...passed 00:06:06.405 Test: blockdev write read size > 128k ...passed 00:06:06.405 Test: blockdev write read invalid size ...passed 00:06:06.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.405 Test: blockdev write read max offset ...passed 00:06:06.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.405 Test: blockdev writev readv 8 blocks ...passed 00:06:06.405 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.405 Test: blockdev writev readv block ...passed 00:06:06.405 Test: blockdev writev readv size > 128k ...passed 00:06:06.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.405 Test: blockdev comparev and writev ...[2024-11-20 06:04:26.017900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bec02000 len:0x1000 00:06:06.405 passed 00:06:06.405 Test: blockdev nvme passthru rw ...[2024-11-20 06:04:26.017943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.405 passed 00:06:06.405 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:04:26.018481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.405 passed 00:06:06.405 Test: blockdev nvme admin passthru ...[2024-11-20 06:04:26.018520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.405 passed 00:06:06.405 Test: blockdev copy ...passed 00:06:06.405 Suite: bdevio tests on: Nvme2n2 00:06:06.405 Test: blockdev write read block ...passed 00:06:06.405 Test: blockdev write zeroes read block ...passed 00:06:06.405 Test: blockdev write zeroes read no split ...passed 00:06:06.667 Test: blockdev write zeroes read split ...passed 00:06:06.667 Test: blockdev write zeroes read split partial ...passed 00:06:06.667 Test: blockdev reset ...[2024-11-20 06:04:26.071263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:06.667 [2024-11-20 06:04:26.074170] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:06.667 passed 00:06:06.667 Test: blockdev write read 8 blocks ...passed 00:06:06.667 Test: blockdev write read size > 128k ...passed 00:06:06.667 Test: blockdev write read invalid size ...passed 00:06:06.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.667 Test: blockdev write read max offset ...passed 00:06:06.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.667 Test: blockdev writev readv 8 blocks ...passed 00:06:06.667 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.667 Test: blockdev writev readv block ...passed 00:06:06.667 Test: blockdev writev readv size > 128k ...passed 00:06:06.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.667 Test: blockdev comparev and writev ...passed 00:06:06.667 Test: blockdev nvme passthru rw ...[2024-11-20 06:04:26.080102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4438000 len:0x1000 00:06:06.667 [2024-11-20 06:04:26.080138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.667 passed 00:06:06.667 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:04:26.080756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.667 [2024-11-20 06:04:26.080782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.667 passed 00:06:06.667 Test: blockdev nvme admin passthru ...passed 00:06:06.667 Test: blockdev copy ...passed 00:06:06.667 Suite: bdevio tests on: Nvme2n1 00:06:06.667 Test: blockdev write read block ...passed 00:06:06.667 Test: blockdev write zeroes read block ...passed 00:06:06.667 Test: blockdev write zeroes read no split ...passed 00:06:06.667 Test: blockdev write zeroes read split ...passed 00:06:06.668 Test: blockdev write zeroes read split partial ...passed 00:06:06.668 Test: blockdev reset ...[2024-11-20 06:04:26.122215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:06.668 passed 00:06:06.668 Test: blockdev write read 8 blocks ...[2024-11-20 06:04:26.125167] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:06.668 passed 00:06:06.668 Test: blockdev write read size > 128k ...passed 00:06:06.668 Test: blockdev write read invalid size ...passed 00:06:06.668 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.668 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.668 Test: blockdev write read max offset ...passed 00:06:06.668 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.668 Test: blockdev writev readv 8 blocks ...passed 00:06:06.668 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.668 Test: blockdev writev readv block ...passed 00:06:06.668 Test: blockdev writev readv size > 128k ...passed 00:06:06.668 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.668 Test: blockdev comparev and writev ...[2024-11-20 06:04:26.131304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4434000 len:0x1000 00:06:06.668 [2024-11-20 06:04:26.131341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.668 passed 00:06:06.668 Test: blockdev nvme passthru rw ...passed 00:06:06.668 Test: blockdev nvme passthru vendor specific ...passed 00:06:06.668 Test: blockdev nvme admin passthru ...[2024-11-20 06:04:26.131852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.668 [2024-11-20 06:04:26.131875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.668 passed 00:06:06.668 Test: blockdev copy ...passed 00:06:06.668 Suite: bdevio tests on: Nvme1n1p2 00:06:06.668 Test: blockdev write read block ...passed 00:06:06.668 Test: blockdev write zeroes read block ...passed 00:06:06.668 Test: blockdev write zeroes read no split ...passed 00:06:06.668 Test: blockdev write zeroes read split ...passed 00:06:06.668 Test: blockdev write zeroes read split partial ...passed 00:06:06.668 Test: blockdev reset ...[2024-11-20 06:04:26.174900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:06.668 [2024-11-20 06:04:26.177812] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:06.668 passed 00:06:06.668 Test: blockdev write read 8 blocks ...passed 00:06:06.668 Test: blockdev write read size > 128k ...passed 00:06:06.668 Test: blockdev write read invalid size ...passed 00:06:06.668 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.668 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.668 Test: blockdev write read max offset ...passed 00:06:06.668 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.668 Test: blockdev writev readv 8 blocks ...passed 00:06:06.668 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.668 Test: blockdev writev readv block ...passed 00:06:06.668 Test: blockdev writev readv size > 128k ...passed 00:06:06.668 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.668 Test: blockdev comparev and writev ...[2024-11-20 06:04:26.182867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c4430000 len:0x1000 00:06:06.668 [2024-11-20 06:04:26.182900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.668 passed 00:06:06.668 Test: blockdev nvme passthru rw ...passed 00:06:06.668 Test: blockdev nvme passthru vendor specific ...passed 00:06:06.668 Test: blockdev nvme admin passthru ...passed 00:06:06.668 Test: blockdev copy ...passed 00:06:06.668 Suite: bdevio tests on: Nvme1n1p1 00:06:06.668 Test: blockdev write read block ...passed 00:06:06.668 Test: blockdev write zeroes read block ...passed 00:06:06.668 Test: blockdev write zeroes read no split ...passed 00:06:06.668 Test: blockdev write zeroes read split ...passed 00:06:06.668 Test: blockdev write zeroes read split partial ...passed 00:06:06.668 Test: blockdev reset ...[2024-11-20 06:04:26.223405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:06.668 passed 00:06:06.668 Test: blockdev write read 8 blocks ...[2024-11-20 06:04:26.226029] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:06.668 passed 00:06:06.668 Test: blockdev write read size > 128k ...passed 00:06:06.668 Test: blockdev write read invalid size ...passed 00:06:06.668 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.668 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.668 Test: blockdev write read max offset ...passed 00:06:06.668 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.668 Test: blockdev writev readv 8 blocks ...passed 00:06:06.668 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.668 Test: blockdev writev readv block ...passed 00:06:06.668 Test: blockdev writev readv size > 128k ...passed 00:06:06.668 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.668 Test: blockdev comparev and writev ...[2024-11-20 06:04:26.231897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bea0e000 len:0x1000 00:06:06.668 [2024-11-20 06:04:26.231930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.668 passed 00:06:06.668 Test: blockdev nvme passthru rw ...passed 00:06:06.668 Test: blockdev nvme passthru vendor specific ...passed 00:06:06.668 Test: blockdev nvme admin passthru ...passed 00:06:06.668 Test: blockdev copy ...passed 00:06:06.668 Suite: bdevio tests on: Nvme0n1 00:06:06.668 Test: blockdev write read block ...passed 00:06:06.668 Test: blockdev write zeroes read block ...passed 00:06:06.668 Test: blockdev write zeroes read no split ...passed 00:06:06.668 Test: blockdev write zeroes read split ...passed 00:06:06.668 Test: blockdev write zeroes read split partial ...passed 00:06:06.668 Test: blockdev reset ...[2024-11-20 06:04:26.272896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:06.668 passed 00:06:06.668 Test: blockdev write read 8 blocks ...[2024-11-20 06:04:26.275519] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:06.668 passed 00:06:06.668 Test: blockdev write read size > 128k ...passed 00:06:06.668 Test: blockdev write read invalid size ...passed 00:06:06.668 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.668 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.668 Test: blockdev write read max offset ...passed 00:06:06.668 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.668 Test: blockdev writev readv 8 blocks ...passed 00:06:06.668 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.668 Test: blockdev writev readv block ...passed 00:06:06.668 Test: blockdev writev readv size > 128k ...passed 00:06:06.668 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.668 Test: blockdev comparev and writev ...[2024-11-20 06:04:26.280536] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:06.668 separate metadata which is not supported yet. 
00:06:06.668 passed 00:06:06.668 Test: blockdev nvme passthru rw ...passed 00:06:06.668 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:04:26.280899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:06.668 [2024-11-20 06:04:26.280934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:06.668 passed 00:06:06.668 Test: blockdev nvme admin passthru ...passed 00:06:06.668 Test: blockdev copy ...passed 00:06:06.668 00:06:06.668 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.668 suites 7 7 n/a 0 0 00:06:06.668 tests 161 161 161 0 0 00:06:06.668 asserts 1025 1025 1025 0 n/a 00:06:06.668 00:06:06.668 Elapsed time = 1.040 seconds 00:06:06.668 0 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61469 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61469 ']' 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61469 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61469 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:06.926 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:06.926 killing process with pid 61469 00:06:06.927 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61469' 00:06:06.927 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61469 00:06:06.927 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61469 00:06:07.493 06:04:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:07.493 00:06:07.493 real 0m2.083s 00:06:07.493 user 0m5.352s 00:06:07.493 sys 0m0.267s 00:06:07.493 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.493 06:04:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:07.493 ************************************ 00:06:07.493 END TEST bdev_bounds 00:06:07.493 ************************************ 00:06:07.493 06:04:27 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:07.493 06:04:27 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:07.493 06:04:27 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.493 06:04:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.493 ************************************ 00:06:07.493 START TEST bdev_nbd 00:06:07.493 ************************************ 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:07.493 06:04:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61523 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61523 /var/tmp/spdk-nbd.sock 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61523 ']' 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.493 06:04:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.494 06:04:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:07.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.494 06:04:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.494 06:04:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.494 06:04:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:07.494 [2024-11-20 06:04:27.092978] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:06:07.494 [2024-11-20 06:04:27.093093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.771 [2024-11-20 06:04:27.251169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.771 [2024-11-20 06:04:27.352610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:08.394 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:08.651 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:08.651 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:08.651 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.652 1+0 records in 00:06:08.652 1+0 records out 00:06:08.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332214 s, 12.3 MB/s 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:08.652 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.910 1+0 records in 00:06:08.910 1+0 records out 00:06:08.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388251 s, 10.5 MB/s 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:08.910 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.168 1+0 records in 00:06:09.168 1+0 records out 00:06:09.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444554 s, 9.2 MB/s 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:09.168 06:04:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.428 1+0 records in 00:06:09.428 1+0 records out 00:06:09.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346928 s, 11.8 MB/s 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:09.428 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:09.685 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.686 1+0 records in 00:06:09.686 1+0 records out 00:06:09.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328956 s, 12.5 MB/s 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:09.686 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.944 1+0 records in 00:06:09.944 1+0 records out 00:06:09.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440209 s, 9.3 MB/s 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:09.944 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.201 1+0 records in 00:06:10.201 1+0 records out 00:06:10.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452437 s, 9.1 MB/s 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:10.201 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.458 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:10.458 { 00:06:10.458 "nbd_device": "/dev/nbd0", 00:06:10.458 "bdev_name": "Nvme0n1" 00:06:10.458 }, 00:06:10.458 { 00:06:10.458 "nbd_device": "/dev/nbd1", 00:06:10.459 "bdev_name": "Nvme1n1p1" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd2", 00:06:10.459 "bdev_name": "Nvme1n1p2" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd3", 00:06:10.459 "bdev_name": "Nvme2n1" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd4", 00:06:10.459 "bdev_name": "Nvme2n2" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd5", 00:06:10.459 "bdev_name": "Nvme2n3" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd6", 00:06:10.459 "bdev_name": "Nvme3n1" 00:06:10.459 } 00:06:10.459 ]' 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd0", 00:06:10.459 "bdev_name": "Nvme0n1" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd1", 00:06:10.459 "bdev_name": "Nvme1n1p1" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd2", 00:06:10.459 "bdev_name": "Nvme1n1p2" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd3", 00:06:10.459 "bdev_name": "Nvme2n1" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd4", 00:06:10.459 "bdev_name": "Nvme2n2" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd5", 00:06:10.459 "bdev_name": "Nvme2n3" 00:06:10.459 }, 00:06:10.459 { 00:06:10.459 "nbd_device": "/dev/nbd6", 00:06:10.459 "bdev_name": "Nvme3n1" 00:06:10.459 } 00:06:10.459 ]' 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.459 06:04:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.716 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.034 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.034 06:04:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.294 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:11.561 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:11.561 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:11.561 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:11.561 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.561 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.561 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:11.562 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.562 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.562 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.562 06:04:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.562 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:11.825 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:11.825 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:11.825 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:11.825 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.825 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.826 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:11.826 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.826 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.826 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.826 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.826 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:12.086 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.087 
06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:12.087 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:12.348 /dev/nbd0 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.348 1+0 records in 00:06:12.348 1+0 records out 00:06:12.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893139 s, 4.6 MB/s 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:12.348 06:04:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:12.608 /dev/nbd1 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.608 06:04:32 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.608 1+0 records in 00:06:12.608 1+0 records out 00:06:12.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714732 s, 5.7 MB/s 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:12.608 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:12.868 /dev/nbd10 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.868 1+0 records in 00:06:12.868 1+0 records out 00:06:12.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684741 s, 6.0 MB/s 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:12.868 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:13.126 /dev/nbd11 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.126 1+0 records in 00:06:13.126 1+0 records out 00:06:13.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830205 s, 4.9 MB/s 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:13.126 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:13.126 /dev/nbd12 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.386 1+0 records in 00:06:13.386 1+0 records out 00:06:13.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126104 s, 3.2 MB/s 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:13.386 /dev/nbd13 00:06:13.386 06:04:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.386 1+0 records in 00:06:13.386 1+0 records out 00:06:13.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053299 s, 7.7 MB/s 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:13.386 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:13.646 /dev/nbd14 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.646 1+0 records in 00:06:13.646 1+0 records out 00:06:13.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000983848 s, 4.2 MB/s 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.646 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.035 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd0", 00:06:14.035 "bdev_name": "Nvme0n1" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd1", 00:06:14.035 "bdev_name": "Nvme1n1p1" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd10", 00:06:14.035 "bdev_name": "Nvme1n1p2" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd11", 00:06:14.035 "bdev_name": "Nvme2n1" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd12", 00:06:14.035 "bdev_name": "Nvme2n2" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd13", 00:06:14.035 "bdev_name": "Nvme2n3" 
00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd14", 00:06:14.035 "bdev_name": "Nvme3n1" 00:06:14.035 } 00:06:14.035 ]' 00:06:14.035 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.035 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd0", 00:06:14.035 "bdev_name": "Nvme0n1" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd1", 00:06:14.035 "bdev_name": "Nvme1n1p1" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd10", 00:06:14.035 "bdev_name": "Nvme1n1p2" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd11", 00:06:14.035 "bdev_name": "Nvme2n1" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd12", 00:06:14.035 "bdev_name": "Nvme2n2" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd13", 00:06:14.035 "bdev_name": "Nvme2n3" 00:06:14.035 }, 00:06:14.035 { 00:06:14.035 "nbd_device": "/dev/nbd14", 00:06:14.035 "bdev_name": "Nvme3n1" 00:06:14.035 } 00:06:14.035 ]' 00:06:14.035 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.035 /dev/nbd1 00:06:14.035 /dev/nbd10 00:06:14.035 /dev/nbd11 00:06:14.035 /dev/nbd12 00:06:14.035 /dev/nbd13 00:06:14.035 /dev/nbd14' 00:06:14.035 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.035 /dev/nbd1 00:06:14.036 /dev/nbd10 00:06:14.036 /dev/nbd11 00:06:14.036 /dev/nbd12 00:06:14.036 /dev/nbd13 00:06:14.036 /dev/nbd14' 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:14.036 256+0 records in 00:06:14.036 256+0 records out 00:06:14.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712016 s, 147 MB/s 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.036 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.305 256+0 records in 00:06:14.305 256+0 records out 00:06:14.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.187827 s, 5.6 MB/s 00:06:14.305 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.305 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.305 256+0 records in 00:06:14.305 256+0 records out 00:06:14.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18708 s, 5.6 MB/s 00:06:14.305 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.305 06:04:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:14.564 256+0 records in 00:06:14.564 256+0 records out 00:06:14.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209385 s, 5.0 MB/s 00:06:14.564 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.564 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:14.826 256+0 records in 00:06:14.826 256+0 records out 00:06:14.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152398 s, 6.9 MB/s 00:06:14.826 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.826 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:15.085 256+0 records in 00:06:15.085 256+0 records out 00:06:15.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1888 s, 5.6 MB/s 00:06:15.085 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.085 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:15.085 256+0 records in 00:06:15.085 256+0 records out 00:06:15.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.217364 s, 4.8 MB/s 00:06:15.085 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.085 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:15.346 256+0 records in 00:06:15.346 256+0 records out 00:06:15.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194377 s, 5.4 MB/s 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.346 06:04:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.606 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.864 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.865 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.122 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:16.379 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:16.379 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:16.379 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:16.379 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.379 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.379 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:16.379 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.380 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.380 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.380 06:04:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.638 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.897 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:17.154 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:17.412 malloc_lvol_verify 00:06:17.412 06:04:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:17.671 260c5fad-5fe1-46c7-ab8d-7ea231d7d745 00:06:17.671 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:17.929 8c847c8d-23a7-42bd-80af-7c7747732b65 00:06:17.929 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:17.929 /dev/nbd0 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:18.187 mke2fs 1.47.0 (5-Feb-2023) 00:06:18.187 Discarding device blocks: 0/4096 done 00:06:18.187 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:18.187 00:06:18.187 Allocating group tables: 0/1 done 00:06:18.187 Writing inode tables: 0/1 done 00:06:18.187 Creating journal (1024 blocks): done 00:06:18.187 Writing superblocks and filesystem accounting information: 0/1 done 00:06:18.187 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.187 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61523 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61523 ']' 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61523 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61523 00:06:18.445 killing process with pid 61523 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61523' 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61523 00:06:18.445 06:04:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61523 00:06:19.379 ************************************ 00:06:19.379 END TEST bdev_nbd 00:06:19.379 ************************************ 00:06:19.379 06:04:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:19.379 00:06:19.379 real 0m11.626s 00:06:19.379 user 0m16.063s 00:06:19.379 sys 0m3.734s 00:06:19.379 06:04:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.379 06:04:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:19.380 skipping fio tests on NVMe due to multi-ns failures. 00:06:19.380 06:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:19.380 06:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:19.380 06:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:19.380 06:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:19.380 06:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:19.380 06:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:19.380 06:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:19.380 06:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.380 06:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:19.380 ************************************ 00:06:19.380 START TEST bdev_verify 00:06:19.380 ************************************ 00:06:19.380 06:04:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:19.380 [2024-11-20 06:04:38.752139] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:19.380 [2024-11-20 06:04:38.752415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61945 ] 00:06:19.380 [2024-11-20 06:04:38.913660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.637 [2024-11-20 06:04:39.016330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.637 [2024-11-20 06:04:39.016453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.202 Running I/O for 5 seconds... 
00:06:22.512 22080.00 IOPS, 86.25 MiB/s [2024-11-20T06:04:43.187Z] 20800.00 IOPS, 81.25 MiB/s [2024-11-20T06:04:44.121Z] 20800.00 IOPS, 81.25 MiB/s [2024-11-20T06:04:45.053Z] 20576.00 IOPS, 80.38 MiB/s [2024-11-20T06:04:45.053Z] 20812.80 IOPS, 81.30 MiB/s 00:06:25.420 Latency(us) 00:06:25.420 [2024-11-20T06:04:45.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.421 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x0 length 0xbd0bd 00:06:25.421 Nvme0n1 : 5.08 1460.36 5.70 0.00 0.00 87464.26 13208.02 103244.41 00:06:25.421 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:25.421 Nvme0n1 : 5.07 1489.47 5.82 0.00 0.00 85742.00 14216.27 98001.53 00:06:25.421 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x0 length 0x4ff80 00:06:25.421 Nvme1n1p1 : 5.09 1459.94 5.70 0.00 0.00 87278.93 13208.02 97194.93 00:06:25.421 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:25.421 Nvme1n1p1 : 5.07 1489.03 5.82 0.00 0.00 85641.23 14115.45 94775.14 00:06:25.421 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x0 length 0x4ff7f 00:06:25.421 Nvme1n1p2 : 5.09 1459.52 5.70 0.00 0.00 87136.88 11544.42 95985.03 00:06:25.421 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:25.421 Nvme1n1p2 : 5.07 1488.60 5.81 0.00 0.00 85475.85 14115.45 93161.94 00:06:25.421 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x0 length 0x80000 00:06:25.421 Nvme2n1 : 5.09 1458.66 5.70 0.00 0.00 86993.67 13510.50 96388.33 00:06:25.421 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x80000 length 0x80000 00:06:25.421 Nvme2n1 : 5.07 1488.18 5.81 0.00 0.00 85324.84 14518.74 91548.75 00:06:25.421 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x0 length 0x80000 00:06:25.421 Nvme2n2 : 5.09 1458.28 5.70 0.00 0.00 86834.33 13812.97 100018.02 00:06:25.421 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x80000 length 0x80000 00:06:25.421 Nvme2n2 : 5.08 1487.77 5.81 0.00 0.00 85175.13 14619.57 91952.05 00:06:25.421 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x0 length 0x80000 00:06:25.421 Nvme2n3 : 5.09 1457.90 5.69 0.00 0.00 86678.07 14014.62 102034.51 00:06:25.421 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x80000 length 0x80000 00:06:25.421 Nvme2n3 : 5.08 1487.35 5.81 0.00 0.00 85015.30 14821.22 93565.24 00:06:25.421 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x0 length 0x20000 00:06:25.421 Nvme3n1 : 5.09 1457.52 5.69 0.00 0.00 86531.99 13611.32 102437.81 00:06:25.421 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.421 Verification LBA range: start 0x20000 length 0x20000 00:06:25.421 
Nvme3n1 : 5.08 1486.93 5.81 0.00 0.00 84875.82 10284.11 98001.53 00:06:25.421 [2024-11-20T06:04:45.054Z] =================================================================================================================== 00:06:25.421 [2024-11-20T06:04:45.054Z] Total : 20629.51 80.58 0.00 0.00 86147.75 10284.11 103244.41 00:06:26.361 00:06:26.361 real 0m7.188s 00:06:26.361 user 0m13.466s 00:06:26.361 sys 0m0.218s 00:06:26.361 06:04:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.361 ************************************ 00:06:26.361 END TEST bdev_verify 00:06:26.361 ************************************ 00:06:26.361 06:04:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.361 06:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:26.361 06:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:26.361 06:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.361 06:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:26.361 ************************************ 00:06:26.361 START TEST bdev_verify_big_io 00:06:26.361 ************************************ 00:06:26.362 06:04:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:26.623 [2024-11-20 06:04:46.013526] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:26.623 [2024-11-20 06:04:46.013802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62043 ] 00:06:26.623 [2024-11-20 06:04:46.173897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.885 [2024-11-20 06:04:46.278509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.885 [2024-11-20 06:04:46.278545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.454 Running I/O for 5 seconds... 
00:06:33.291 699.00 IOPS, 43.69 MiB/s [2024-11-20T06:04:53.184Z] 2382.00 IOPS, 148.88 MiB/s [2024-11-20T06:04:53.184Z] 2932.67 IOPS, 183.29 MiB/s 00:06:33.551 Latency(us) 00:06:33.551 [2024-11-20T06:04:53.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.551 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x0 length 0xbd0b 00:06:33.551 Nvme0n1 : 5.79 104.81 6.55 0.00 0.00 1172572.59 20064.10 1245385.65 00:06:33.551 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:33.551 Nvme0n1 : 5.94 99.63 6.23 0.00 0.00 1233575.33 9931.22 1290555.08 00:06:33.551 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x0 length 0x4ff8 00:06:33.551 Nvme1n1p1 : 5.94 99.66 6.23 0.00 0.00 1165587.05 65334.35 1122782.92 00:06:33.551 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:33.551 Nvme1n1p1 : 6.11 97.36 6.08 0.00 0.00 1218415.49 52428.80 1832588.21 00:06:33.551 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x0 length 0x4ff7 00:06:33.551 Nvme1n1p2 : 6.12 70.56 4.41 0.00 0.00 1636331.23 140347.86 2335904.69 00:06:33.551 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:33.551 Nvme1n1p2 : 6.03 98.14 6.13 0.00 0.00 1182403.99 75013.51 1871304.86 00:06:33.551 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x0 length 0x8000 00:06:33.551 Nvme2n1 : 6.07 111.00 6.94 0.00 0.00 1010636.92 87112.47 1426063.36 00:06:33.551 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x8000 length 0x8000 00:06:33.551 Nvme2n1 : 6.03 98.10 6.13 0.00 0.00 1141053.29 85902.57 1910021.51 00:06:33.551 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x0 length 0x8000 00:06:33.551 Nvme2n2 : 6.08 115.88 7.24 0.00 0.00 947345.79 40531.50 1084066.26 00:06:33.551 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x8000 length 0x8000 00:06:33.551 Nvme2n2 : 6.11 101.91 6.37 0.00 0.00 1062726.73 75820.11 1935832.62 00:06:33.551 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x0 length 0x8000 00:06:33.551 Nvme2n3 : 6.12 120.36 7.52 0.00 0.00 885487.53 35691.91 1116330.14 00:06:33.551 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x8000 length 0x8000 00:06:33.551 Nvme2n3 : 6.16 111.98 7.00 0.00 0.00 938408.38 15930.29 1974549.27 00:06:33.551 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x0 length 0x2000 00:06:33.551 Nvme3n1 : 6.13 130.08 8.13 0.00 0.00 797034.73 5873.03 1155046.79 00:06:33.551 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:33.551 Verification LBA range: start 0x2000 length 0x2000 00:06:33.551 Nvme3n1 : 6.19 142.08 8.88 0.00 0.00 720093.24 652.21 1471232.79 00:06:33.551 
[2024-11-20T06:04:53.185Z] =================================================================================================================== 00:06:33.552 [2024-11-20T06:04:53.185Z] Total : 1501.54 93.85 0.00 0.00 1045213.43 652.21 2335904.69 00:06:35.497 00:06:35.497 real 0m8.747s 00:06:35.497 user 0m16.544s 00:06:35.497 sys 0m0.234s 00:06:35.497 06:04:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.497 06:04:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:35.497 ************************************ 00:06:35.497 END TEST bdev_verify_big_io 00:06:35.497 ************************************ 00:06:35.497 06:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:35.497 06:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:35.497 06:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.497 06:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.497 ************************************ 00:06:35.497 START TEST bdev_write_zeroes 00:06:35.497 ************************************ 00:06:35.497 06:04:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:35.497 [2024-11-20 06:04:54.838384] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:35.497 [2024-11-20 06:04:54.838525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62152 ] 00:06:35.497 [2024-11-20 06:04:54.998057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.497 [2024-11-20 06:04:55.101835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.072 Running I/O for 1 seconds... 
00:06:37.454 31154.00 IOPS, 121.70 MiB/s 00:06:37.454 Latency(us) 00:06:37.454 [2024-11-20T06:04:57.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.454 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:37.454 Nvme0n1 : 1.04 4307.86 16.83 0.00 0.00 29634.49 6604.01 416204.01 00:06:37.454 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:37.454 Nvme1n1p1 : 1.04 4512.61 17.63 0.00 0.00 28252.74 11141.12 374260.97 00:06:37.454 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:37.454 Nvme1n1p2 : 1.04 4482.07 17.51 0.00 0.00 28285.34 11897.30 367808.20 00:06:37.454 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:37.454 Nvme2n1 : 1.04 4474.10 17.48 0.00 0.00 28289.19 12098.95 374260.97 00:06:37.454 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:37.454 Nvme2n2 : 1.05 4469.03 17.46 0.00 0.00 28210.81 10989.88 374260.97 00:06:37.454 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:37.454 Nvme2n3 : 1.05 4463.98 17.44 0.00 0.00 28154.47 8217.21 374260.97 00:06:37.454 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:37.454 Nvme3n1 : 1.05 4397.89 17.18 0.00 0.00 28534.20 12048.54 374260.97 00:06:37.454 [2024-11-20T06:04:57.087Z] =================================================================================================================== 00:06:37.454 [2024-11-20T06:04:57.087Z] Total : 31107.53 121.51 0.00 0.00 28473.10 6604.01 416204.01 00:06:38.022 ************************************ 00:06:38.022 END TEST bdev_write_zeroes 00:06:38.022 ************************************ 00:06:38.022 00:06:38.022 real 0m2.834s 00:06:38.022 user 0m2.505s 00:06:38.022 sys 0m0.209s 00:06:38.022 06:04:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.022 06:04:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:38.022 06:04:57 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.022 06:04:57 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:38.022 06:04:57 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.022 06:04:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.280 ************************************ 00:06:38.280 START TEST bdev_json_nonenclosed 00:06:38.280 ************************************ 00:06:38.280 06:04:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.280 [2024-11-20 06:04:57.719814] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:06:38.280 [2024-11-20 06:04:57.719922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62206 ] 00:06:38.280 [2024-11-20 06:04:57.881081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.537 [2024-11-20 06:04:57.977284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.537 [2024-11-20 06:04:57.977369] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:38.537 [2024-11-20 06:04:57.977386] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:38.537 [2024-11-20 06:04:57.977395] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.537 00:06:38.537 real 0m0.499s 00:06:38.537 user 0m0.301s 00:06:38.537 sys 0m0.094s 00:06:38.537 06:04:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.537 06:04:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:38.537 ************************************ 00:06:38.537 END TEST bdev_json_nonenclosed 00:06:38.537 ************************************ 00:06:38.796 06:04:58 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.796 06:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:38.796 06:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.796 06:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 ************************************ 00:06:38.796 START TEST bdev_json_nonarray 00:06:38.796 ************************************ 00:06:38.796 06:04:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.796 [2024-11-20 06:04:58.262261] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:38.796 [2024-11-20 06:04:58.262373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62236 ] 00:06:38.796 [2024-11-20 06:04:58.421779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.056 [2024-11-20 06:04:58.523411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.056 [2024-11-20 06:04:58.523519] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:39.056 [2024-11-20 06:04:58.523537] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:39.056 [2024-11-20 06:04:58.523546] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.318 00:06:39.318 real 0m0.505s 00:06:39.318 user 0m0.312s 00:06:39.318 sys 0m0.089s 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.318 ************************************ 00:06:39.318 END TEST bdev_json_nonarray 00:06:39.318 ************************************ 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:39.318 06:04:58 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:06:39.318 06:04:58 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:06:39.318 06:04:58 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:39.318 06:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.318 06:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.318 06:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.318 ************************************ 00:06:39.318 START TEST bdev_gpt_uuid 00:06:39.318 ************************************ 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62257 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62257 00:06:39.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62257 ']' 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.318 06:04:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:39.318 [2024-11-20 06:04:58.847782] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:06:39.318 [2024-11-20 06:04:58.848389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62257 ] 00:06:39.584 [2024-11-20 06:04:59.006231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.584 [2024-11-20 06:04:59.108014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.169 06:04:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.169 06:04:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:06:40.169 06:04:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.169 06:04:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.169 06:04:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:40.426 Some configs were skipped because the RPC state that can call them passed over. 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.426 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:06:40.426 { 00:06:40.426 "name": "Nvme1n1p1", 00:06:40.426 "aliases": [ 00:06:40.426 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:40.426 ], 00:06:40.426 "product_name": "GPT Disk", 00:06:40.426 "block_size": 4096, 00:06:40.426 "num_blocks": 655104, 00:06:40.426 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:40.426 "assigned_rate_limits": { 00:06:40.426 "rw_ios_per_sec": 0, 00:06:40.426 "rw_mbytes_per_sec": 0, 00:06:40.426 "r_mbytes_per_sec": 0, 00:06:40.426 "w_mbytes_per_sec": 0 00:06:40.426 }, 00:06:40.426 "claimed": false, 00:06:40.426 "zoned": false, 00:06:40.426 "supported_io_types": { 00:06:40.426 "read": true, 00:06:40.426 "write": true, 00:06:40.426 "unmap": true, 00:06:40.426 "flush": true, 00:06:40.426 "reset": true, 00:06:40.426 "nvme_admin": false, 00:06:40.426 "nvme_io": false, 00:06:40.426 "nvme_io_md": false, 00:06:40.426 "write_zeroes": true, 00:06:40.426 "zcopy": false, 00:06:40.426 "get_zone_info": false, 00:06:40.426 "zone_management": false, 00:06:40.426 "zone_append": false, 00:06:40.426 "compare": true, 00:06:40.426 "compare_and_write": false, 00:06:40.426 "abort": true, 00:06:40.426 "seek_hole": false, 00:06:40.426 "seek_data": false, 00:06:40.427 "copy": true, 00:06:40.427 "nvme_iov_md": false 00:06:40.427 }, 00:06:40.427 "driver_specific": { 
00:06:40.427 "gpt": { 00:06:40.427 "base_bdev": "Nvme1n1", 00:06:40.427 "offset_blocks": 256, 00:06:40.427 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:40.427 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:40.427 "partition_name": "SPDK_TEST_first" 00:06:40.427 } 00:06:40.427 } 00:06:40.427 } 00:06:40.427 ]' 00:06:40.427 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.684 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:06:40.684 { 00:06:40.684 "name": "Nvme1n1p2", 00:06:40.684 "aliases": [ 00:06:40.684 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:40.684 ], 00:06:40.684 "product_name": "GPT Disk", 00:06:40.684 "block_size": 4096, 00:06:40.684 "num_blocks": 655103, 00:06:40.684 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:40.684 "assigned_rate_limits": { 00:06:40.684 "rw_ios_per_sec": 0, 00:06:40.684 "rw_mbytes_per_sec": 0, 00:06:40.684 "r_mbytes_per_sec": 0, 00:06:40.684 "w_mbytes_per_sec": 0 00:06:40.684 }, 00:06:40.684 "claimed": false, 00:06:40.684 "zoned": false, 00:06:40.684 "supported_io_types": { 00:06:40.684 "read": true, 00:06:40.684 "write": true, 00:06:40.684 "unmap": true, 00:06:40.684 "flush": true, 00:06:40.684 "reset": true, 00:06:40.684 "nvme_admin": false, 00:06:40.684 "nvme_io": false, 00:06:40.685 "nvme_io_md": false, 00:06:40.685 "write_zeroes": true, 00:06:40.685 "zcopy": false, 00:06:40.685 "get_zone_info": false, 00:06:40.685 "zone_management": false, 00:06:40.685 "zone_append": false, 00:06:40.685 "compare": true, 00:06:40.685 "compare_and_write": false, 00:06:40.685 "abort": true, 00:06:40.685 "seek_hole": false, 00:06:40.685 "seek_data": false, 00:06:40.685 "copy": true, 00:06:40.685 "nvme_iov_md": false 00:06:40.685 }, 00:06:40.685 "driver_specific": { 00:06:40.685 "gpt": { 00:06:40.685 "base_bdev": "Nvme1n1", 00:06:40.685 "offset_blocks": 655360, 00:06:40.685 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:40.685 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:40.685 "partition_name": "SPDK_TEST_second" 00:06:40.685 } 00:06:40.685 } 00:06:40.685 } 00:06:40.685 ]' 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62257 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62257 ']' 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62257 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62257 00:06:40.685 killing process with pid 62257 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62257' 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62257 00:06:40.685 06:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62257 00:06:42.652 00:06:42.652 real 0m2.978s 00:06:42.652 user 0m3.082s 00:06:42.652 sys 0m0.392s 00:06:42.652 ************************************ 00:06:42.652 END TEST bdev_gpt_uuid 00:06:42.652 ************************************ 00:06:42.652 06:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.652 06:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:42.652 06:05:01 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:42.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.652 Waiting for block devices as requested 00:06:42.913 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.913 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:06:42.913 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.913 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.244 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:48.244 06:05:07 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:06:48.244 06:05:07 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:06:48.815 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:48.815 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:48.815 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:48.815 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:48.815 06:05:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:06:48.815 00:06:48.815 real 0m56.914s 00:06:48.815 user 1m11.845s 00:06:48.815 sys 0m7.899s 00:06:48.815 06:05:08 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.815 ************************************ 00:06:48.815 END TEST blockdev_nvme_gpt 00:06:48.815 ************************************ 00:06:48.815 06:05:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.815 06:05:08 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:48.815 06:05:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.815 06:05:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.815 06:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:48.815 ************************************ 00:06:48.815 START TEST nvme 00:06:48.815 ************************************ 00:06:48.815 06:05:08 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:48.815 * Looking for test storage... 00:06:48.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:06:48.815 06:05:08 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.815 06:05:08 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.815 06:05:08 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.815 06:05:08 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.815 06:05:08 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.816 06:05:08 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.816 06:05:08 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.816 06:05:08 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.816 06:05:08 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.816 06:05:08 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.816 06:05:08 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.816 06:05:08 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.816 06:05:08 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.816 06:05:08 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.816 06:05:08 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.816 06:05:08 nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:48.816 06:05:08 nvme -- scripts/common.sh@345 -- # : 1 00:06:48.816 06:05:08 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.816 06:05:08 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.816 06:05:08 nvme -- scripts/common.sh@365 -- # decimal 1 00:06:48.816 06:05:08 nvme -- scripts/common.sh@353 -- # local d=1 00:06:48.816 06:05:08 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.816 06:05:08 nvme -- scripts/common.sh@355 -- # echo 1 00:06:48.816 06:05:08 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.816 06:05:08 nvme -- scripts/common.sh@366 -- # decimal 2 00:06:48.816 06:05:08 nvme -- scripts/common.sh@353 -- # local d=2 00:06:48.816 06:05:08 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.816 06:05:08 nvme -- scripts/common.sh@355 -- # echo 2 00:06:48.816 06:05:08 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.816 06:05:08 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.816 06:05:08 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.816 06:05:08 nvme -- scripts/common.sh@368 -- # return 0 00:06:48.816 06:05:08 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.816 06:05:08 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.816 --rc genhtml_branch_coverage=1 00:06:48.816 --rc genhtml_function_coverage=1 00:06:48.816 --rc genhtml_legend=1 00:06:48.816 --rc geninfo_all_blocks=1 00:06:48.816 --rc geninfo_unexecuted_blocks=1 00:06:48.816 00:06:48.816 ' 00:06:48.816 06:05:08 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.816 --rc genhtml_branch_coverage=1 00:06:48.816 --rc genhtml_function_coverage=1 00:06:48.816 --rc genhtml_legend=1 00:06:48.816 --rc geninfo_all_blocks=1 00:06:48.816 --rc geninfo_unexecuted_blocks=1 00:06:48.816 00:06:48.816 ' 00:06:48.816 06:05:08 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.816 --rc genhtml_branch_coverage=1 00:06:48.816 --rc genhtml_function_coverage=1 00:06:48.816 --rc genhtml_legend=1 00:06:48.816 --rc geninfo_all_blocks=1 00:06:48.816 --rc geninfo_unexecuted_blocks=1 00:06:48.816 00:06:48.816 ' 00:06:48.816 06:05:08 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.816 --rc genhtml_branch_coverage=1 00:06:48.816 --rc genhtml_function_coverage=1 00:06:48.816 --rc genhtml_legend=1 00:06:48.816 --rc geninfo_all_blocks=1 00:06:48.816 --rc geninfo_unexecuted_blocks=1 00:06:48.816 00:06:48.816 ' 00:06:48.816 06:05:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:49.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:49.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.957 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.957 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.957 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.957 06:05:09 nvme -- nvme/nvme.sh@79 -- # uname 00:06:49.957 06:05:09 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:06:49.957 06:05:09 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:06:49.957 06:05:09 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:06:49.957 06:05:09 nvme -- 
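The scripts/common.sh trace that just completed (lt 1.15 2, inside the lcov version probe) splits both version strings on '.', '-' and ':' and compares them field by field, padding the shorter one with zeros; here it returns 0, confirming lcov 1.15 < 2 before the branch-coverage flags are exported. A condensed sketch of that comparison, under the hypothetical name version_lt:

#!/usr/bin/env bash
# Condensed version comparison in the style of scripts/common.sh's cmp_versions.
version_lt() {
    local IFS=.-:                       # split on the same separators as the trace
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do   # missing fields default to 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                            # equal -> not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"    # matches the trace's return 0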
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:06:49.957 Waiting for stub to ready for secondary processes... 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1073 -- # stubpid=62900 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62900 ]] 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:06:49.957 06:05:09 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:06:49.957 [2024-11-20 06:05:09.544797] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:06:49.957 [2024-11-20 06:05:09.544922] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:06:50.898 [2024-11-20 06:05:10.324184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.898 [2024-11-20 06:05:10.422335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.898 [2024-11-20 06:05:10.422695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.898 [2024-11-20 06:05:10.422773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.898 [2024-11-20 06:05:10.435880] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:06:50.898 [2024-11-20 06:05:10.435914] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:50.898 [2024-11-20 06:05:10.445183] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:06:50.898 [2024-11-20 06:05:10.445299] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:06:50.898 [2024-11-20 06:05:10.449152] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:50.898 [2024-11-20 06:05:10.449346] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:06:50.898 [2024-11-20 06:05:10.449413] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:06:50.898 [2024-11-20 06:05:10.452054] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:50.898 [2024-11-20 06:05:10.452348] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:06:50.898 [2024-11-20 06:05:10.452422] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:06:50.898 [2024-11-20 06:05:10.455108] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:50.898 [2024-11-20 06:05:10.455306] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:06:50.898 [2024-11-20 06:05:10.455428] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:06:50.898 [2024-11-20 06:05:10.455529] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:06:50.898 [2024-11-20 06:05:10.455578] nvme_cuse.c: 
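The block above is autotest_common.sh's _start_stub: it launches test/app/stub with '-s 4096 -i 0 -m 0xE' as the DPDK primary process, then loops on the /var/run/spdk_stub0 marker, also checking /proc/<stubpid> so a crashed stub fails fast instead of hanging the wait. A hedged reconstruction of that loop (paths and arguments are the ones shown in the log; the error handling is illustrative):

#!/usr/bin/env bash
# Reconstruction of the stub wait loop from the trace above.
stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub
"$stub" -s 4096 -i 0 -m 0xE &                 # hold hugepages for secondary processes
stubpid=$!
echo "Waiting for stub to ready for secondary processes..."
while [[ ! -e /var/run/spdk_stub0 ]]; do      # created by the stub once it is ready
    [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
    sleep 1s
done
echo done.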
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:06:50.898 06:05:10 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:50.898 done. 00:06:50.898 06:05:10 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:06:50.898 06:05:10 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:50.898 06:05:10 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:06:50.898 06:05:10 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.898 06:05:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.898 ************************************ 00:06:50.898 START TEST nvme_reset 00:06:50.898 ************************************ 00:06:50.898 06:05:10 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:51.155 Initializing NVMe Controllers 00:06:51.155 Skipping QEMU NVMe SSD at 0000:00:10.0 00:06:51.156 Skipping QEMU NVMe SSD at 0000:00:11.0 00:06:51.156 Skipping QEMU NVMe SSD at 0000:00:13.0 00:06:51.156 Skipping QEMU NVMe SSD at 0000:00:12.0 00:06:51.156 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:06:51.156 00:06:51.156 ************************************ 00:06:51.156 END TEST nvme_reset 00:06:51.156 ************************************ 00:06:51.156 real 0m0.226s 00:06:51.156 user 0m0.069s 00:06:51.156 sys 0m0.110s 00:06:51.156 06:05:10 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.156 06:05:10 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:06:51.156 06:05:10 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:06:51.156 06:05:10 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.156 06:05:10 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.156 06:05:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.156 ************************************ 00:06:51.156 START TEST nvme_identify 00:06:51.156 ************************************ 00:06:51.156 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:06:51.156 06:05:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:06:51.156 06:05:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:06:51.156 06:05:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:06:51.156 06:05:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:06:51.156 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:51.156 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:06:51.156 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:51.417 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:51.417 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:51.417 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:51.417 06:05:10 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:51.417 06:05:10 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:06:51.417 
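nvme_identify above first builds the list of PCI addresses: get_nvme_bdfs pipes the JSON config emitted by scripts/gen_nvme.sh through jq to pull each controller's traddr, sanity-checks that the list is non-empty, and then runs spdk_nvme_identify, whose per-controller dump follows. A short sketch of that discovery step, reassembled from the trace (paths and the jq filter are the ones shown in the log):

#!/usr/bin/env bash
# Reassembled from the nvme_identify trace above.
rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits a JSON bdev config; jq extracts one PCI address per controller.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1               # the (( 4 == 0 )) guard in the trace
printf '%s\n' "${bdfs[@]}"                    # 0000:00:10.0 ... 0000:00:13.0
"$rootdir/build/bin/spdk_nvme_identify" -i 0  # dumps identify data for every bdf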
===================================================== 00:06:51.417 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:51.417 ===================================================== 00:06:51.417 Controller Capabilities/Features 00:06:51.417 ================================ 00:06:51.417 Vendor ID: 1b36 00:06:51.417 Subsystem Vendor ID: 1af4 00:06:51.417 Serial Number: 12340 00:06:51.417 Model Number: QEMU NVMe Ctrl 00:06:51.417 Firmware Version: 8.0.0 00:06:51.417 Recommended Arb Burst: 6 00:06:51.417 IEEE OUI Identifier: 00 54 52 00:06:51.417 Multi-path I/O 00:06:51.417 May have multiple subsystem ports: No 00:06:51.417 May have multiple controllers: No 00:06:51.417 Associated with SR-IOV VF: No 00:06:51.417 Max Data Transfer Size: 524288 00:06:51.417 Max Number of Namespaces: 256 00:06:51.417 Max Number of I/O Queues: 64 00:06:51.417 NVMe Specification Version (VS): 1.4 00:06:51.417 NVMe Specification Version (Identify): 1.4 00:06:51.417 Maximum Queue Entries: 2048 00:06:51.417 Contiguous Queues Required: Yes 00:06:51.417 Arbitration Mechanisms Supported 00:06:51.417 Weighted Round Robin: Not Supported 00:06:51.417 Vendor Specific: Not Supported 00:06:51.417 Reset Timeout: 7500 ms 00:06:51.417 Doorbell Stride: 4 bytes 00:06:51.417 NVM Subsystem Reset: Not Supported 00:06:51.417 Command Sets Supported 00:06:51.417 NVM Command Set: Supported 00:06:51.417 Boot Partition: Not Supported 00:06:51.417 Memory Page Size Minimum: 4096 bytes 00:06:51.417 Memory Page Size Maximum: 65536 bytes 00:06:51.417 Persistent Memory Region: Not Supported 00:06:51.417 Optional Asynchronous Events Supported 00:06:51.417 Namespace Attribute Notices: Supported 00:06:51.417 Firmware Activation Notices: Not Supported 00:06:51.417 ANA Change Notices: Not Supported 00:06:51.417 PLE Aggregate Log Change Notices: Not Supported 00:06:51.417 LBA Status Info Alert Notices: Not Supported 00:06:51.417 EGE Aggregate Log Change Notices: Not Supported 00:06:51.417 Normal NVM Subsystem Shutdown event: Not Supported 00:06:51.417 Zone Descriptor Change Notices: Not Supported 00:06:51.417 Discovery Log Change Notices: Not Supported 00:06:51.417 Controller Attributes 00:06:51.417 128-bit Host Identifier: Not Supported 00:06:51.417 Non-Operational Permissive Mode: Not Supported 00:06:51.417 NVM Sets: Not Supported 00:06:51.417 Read Recovery Levels: Not Supported 00:06:51.417 Endurance Groups: Not Supported 00:06:51.417 Predictable Latency Mode: Not Supported 00:06:51.417 Traffic Based Keep ALive: Not Supported 00:06:51.417 Namespace Granularity: Not Supported 00:06:51.417 SQ Associations: Not Supported 00:06:51.417 UUID List: Not Supported 00:06:51.417 Multi-Domain Subsystem: Not Supported 00:06:51.417 Fixed Capacity Management: Not Supported 00:06:51.417 Variable Capacity Management: Not Supported 00:06:51.417 Delete Endurance Group: Not Supported 00:06:51.417 Delete NVM Set: Not Supported 00:06:51.417 Extended LBA Formats Supported: Supported 00:06:51.417 Flexible Data Placement Supported: Not Supported 00:06:51.417 00:06:51.417 Controller Memory Buffer Support 00:06:51.417 ================================ 00:06:51.417 Supported: No 00:06:51.417 00:06:51.417 Persistent Memory Region Support 00:06:51.417 ================================ 00:06:51.417 Supported: No 00:06:51.417 00:06:51.417 Admin Command Set Attributes 00:06:51.417 ============================ 00:06:51.417 Security Send/Receive: Not Supported 00:06:51.417 Format NVM: Supported 00:06:51.417 Firmware Activate/Download: Not Supported 00:06:51.417 Namespace Management: 
Supported 00:06:51.417 Device Self-Test: Not Supported 00:06:51.417 Directives: Supported 00:06:51.417 NVMe-MI: Not Supported 00:06:51.417 Virtualization Management: Not Supported 00:06:51.417 Doorbell Buffer Config: Supported 00:06:51.417 Get LBA Status Capability: Not Supported 00:06:51.417 Command & Feature Lockdown Capability: Not Supported 00:06:51.417 Abort Command Limit: 4 00:06:51.417 Async Event Request Limit: 4 00:06:51.417 Number of Firmware Slots: N/A 00:06:51.417 Firmware Slot 1 Read-Only: N/A 00:06:51.417 Firmware Activation Without Reset: N/A 00:06:51.417 Multiple Update Detection Support: N/A 00:06:51.417 Firmware Update Granularity: No Information Provided 00:06:51.417 Per-Namespace SMART Log: Yes 00:06:51.417 Asymmetric Namespace Access Log Page: Not Supported 00:06:51.417 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:06:51.417 Command Effects Log Page: Supported 00:06:51.417 Get Log Page Extended Data: Supported 00:06:51.417 Telemetry Log Pages: Not Supported 00:06:51.417 Persistent Event Log Pages: Not Supported 00:06:51.417 Supported Log Pages Log Page: May Support 00:06:51.417 Commands Supported & Effects Log Page: Not Supported 00:06:51.417 Feature Identifiers & Effects Log Page:May Support 00:06:51.417 NVMe-MI Commands & Effects Log Page: May Support 00:06:51.417 Data Area 4 for Telemetry Log: Not Supported 00:06:51.417 Error Log Page Entries Supported: 1 00:06:51.417 Keep Alive: Not Supported 00:06:51.417 00:06:51.417 NVM Command Set Attributes 00:06:51.417 ========================== 00:06:51.417 Submission Queue Entry Size 00:06:51.417 Max: 64 00:06:51.417 Min: 64 00:06:51.417 Completion Queue Entry Size 00:06:51.417 Max: 16 00:06:51.417 Min: 16 00:06:51.417 Number of Namespaces: 256 00:06:51.417 Compare Command: Supported 00:06:51.417 Write Uncorrectable Command: Not Supported 00:06:51.417 Dataset Management Command: Supported 00:06:51.417 Write Zeroes Command: Supported 00:06:51.417 Set Features Save Field: Supported 00:06:51.417 Reservations: Not Supported 00:06:51.417 Timestamp: Supported 00:06:51.417 Copy: Supported 00:06:51.417 Volatile Write Cache: Present 00:06:51.417 Atomic Write Unit (Normal): 1 00:06:51.417 Atomic Write Unit (PFail): 1 00:06:51.417 Atomic Compare & Write Unit: 1 00:06:51.417 Fused Compare & Write: Not Supported 00:06:51.417 Scatter-Gather List 00:06:51.417 SGL Command Set: Supported 00:06:51.417 SGL Keyed: Not Supported 00:06:51.417 SGL Bit Bucket Descriptor: Not Supported 00:06:51.417 SGL Metadata Pointer: Not Supported 00:06:51.417 Oversized SGL: Not Supported 00:06:51.417 SGL Metadata Address: Not Supported 00:06:51.417 SGL Offset: Not Supported 00:06:51.417 Transport SGL Data Block: Not Supported 00:06:51.417 Replay Protected Memory Block: Not Supported 00:06:51.417 00:06:51.417 Firmware Slot Information 00:06:51.417 ========================= 00:06:51.417 Active slot: 1 00:06:51.417 Slot 1 Firmware Revision: 1.0 00:06:51.417 00:06:51.417 00:06:51.417 Commands Supported and Effects 00:06:51.417 ============================== 00:06:51.417 Admin Commands 00:06:51.417 -------------- 00:06:51.417 Delete I/O Submission Queue (00h): Supported 00:06:51.417 Create I/O Submission Queue (01h): Supported 00:06:51.417 Get Log Page (02h): Supported 00:06:51.417 Delete I/O Completion Queue (04h): Supported 00:06:51.417 Create I/O Completion Queue (05h): Supported 00:06:51.417 Identify (06h): Supported 00:06:51.417 Abort (08h): Supported 00:06:51.417 Set Features (09h): Supported 00:06:51.417 Get Features (0Ah): Supported 00:06:51.417 Asynchronous 
Event Request (0Ch): Supported 00:06:51.417 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:51.417 Directive Send (19h): Supported 00:06:51.418 Directive Receive (1Ah): Supported 00:06:51.418 Virtualization Management (1Ch): Supported 00:06:51.418 Doorbell Buffer Config (7Ch): Supported 00:06:51.418 Format NVM (80h): Supported LBA-Change 00:06:51.418 I/O Commands 00:06:51.418 ------------ 00:06:51.418 Flush (00h): Supported LBA-Change 00:06:51.418 Write (01h): Supported LBA-Change 00:06:51.418 Read (02h): Supported 00:06:51.418 Compare (05h): Supported 00:06:51.418 Write Zeroes (08h): Supported LBA-Change 00:06:51.418 Dataset Management (09h): Supported LBA-Change 00:06:51.418 Unknown (0Ch): Supported 00:06:51.418 Unknown (12h): Supported 00:06:51.418 Copy (19h): Supported LBA-Change 00:06:51.418 Unknown (1Dh): Supported LBA-Change 00:06:51.418 00:06:51.418 Error Log 00:06:51.418 ========= 00:06:51.418 00:06:51.418 Arbitration 00:06:51.418 =========== 00:06:51.418 Arbitration Burst: no limit 00:06:51.418 00:06:51.418 Power Management 00:06:51.418 ================ 00:06:51.418 Number of Power States: 1 00:06:51.418 Current Power State: Power State #0 00:06:51.418 Power State #0: 00:06:51.418 Max Power: 25.00 W 00:06:51.418 Non-Operational State: Operational 00:06:51.418 Entry Latency: 16 microseconds 00:06:51.418 Exit Latency: 4 microseconds 00:06:51.418 Relative Read Throughput: 0 00:06:51.418 Relative Read Latency: 0 00:06:51.418 Relative Write Throughput: 0 00:06:51.418 Relative Write Latency: 0 00:06:51.418 Idle Power: Not Reported 00:06:51.418 Active Power: Not Reported 00:06:51.418 Non-Operational Permissive Mode: Not Supported 00:06:51.418 00:06:51.418 Health Information 00:06:51.418 ================== 00:06:51.418 Critical Warnings: 00:06:51.418 Available Spare Space: OK 00:06:51.418 Temperature: OK 00:06:51.418 Device Reliability: OK 00:06:51.418 Read Only: No 00:06:51.418 Volatile Memory Backup: OK 00:06:51.418 Current Temperature: 323 Kelvin (50 Celsius) 00:06:51.418 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:51.418 Available Spare: 0% 00:06:51.418 Available Spare Threshold: 0% 00:06:51.418 Life Percentage Used: 0% 00:06:51.418 Data Units Read: 709 00:06:51.418 Data Units Written: 637 00:06:51.418 Host Read Commands: 39041 00:06:51.418 Host Write Commands: 38827 00:06:51.418 Controller Busy Time: 0 minutes 00:06:51.418 Power Cycles: 0 00:06:51.418 Power On Hours: 0 hours 00:06:51.418 Unsafe Shutdowns: 0 00:06:51.418 Unrecoverable Media Errors: 0 00:06:51.418 Lifetime Error Log Entries: 0 00:06:51.418 Warning Temperature Time: 0 minutes 00:06:51.418 Critical Temperature Time: 0 minutes 00:06:51.418 00:06:51.418 Number of Queues 00:06:51.418 ================ 00:06:51.418 Number of I/O Submission Queues: 64 00:06:51.418 Number of I/O Completion Queues: 64 00:06:51.418 00:06:51.418 ZNS Specific Controller Data 00:06:51.418 ============================ 00:06:51.418 Zone Append Size Limit: 0 00:06:51.418 00:06:51.418 00:06:51.418 Active Namespaces 00:06:51.418 ================= 00:06:51.418 Namespace ID:1 00:06:51.418 Error Recovery Timeout: Unlimited 00:06:51.418 Command Set Identifier: NVM (00h) 00:06:51.418 Deallocate: Supported 00:06:51.418 Deallocated/Unwritten Error: Supported 00:06:51.418 Deallocated Read Value: All 0x00 00:06:51.418 Deallocate in Write Zeroes: Not Supported 00:06:51.418 Deallocated Guard Field: 0xFFFF 00:06:51.418 Flush: Supported 00:06:51.418 Reservation: Not Supported 00:06:51.418 Metadata Transferred as: Separate Metadata Buffer 
00:06:51.418 Namespace Sharing Capabilities: Private 00:06:51.418 Size (in LBAs): 1548666 (5GiB) 00:06:51.418 Capacity (in LBAs): 1548666 (5GiB) 00:06:51.418 Utilization (in LBAs): 1548666 (5GiB) 00:06:51.418 Thin Provisioning: Not Supported 00:06:51.418 Per-NS Atomic Units: No 00:06:51.418 Maximum Single Source Range Length: 128 00:06:51.418 Maximum Copy Length: 128 00:06:51.418 Maximum Source Range Count: 128 00:06:51.418 NGUID/EUI64 Never Reused: No 00:06:51.418 Namespace Write Protected: No 00:06:51.418 Number of LBA Formats: 8 00:06:51.418 Current LBA Format: LBA Format #07 00:06:51.418 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.418 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.418 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.418 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.418 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.418 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.418 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.418 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.418 00:06:51.418 NVM Specific Namespace Data 00:06:51.418 =========================== 00:06:51.418 Logical Block Storage Tag Mask: 0 00:06:51.418 Protection Information Capabilities: 00:06:51.418 16b Guard Protection Information Storage Tag Support: No 00:06:51.418 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.418 Storage Tag Check Read Support: No 00:06:51.418 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.418 ===================================================== 00:06:51.418 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:51.418 ===================================================== 00:06:51.418 Controller Capabilities/Features 00:06:51.418 ================================ 00:06:51.418 Vendor ID: 1b36 00:06:51.418 Subsystem Vendor ID: 1af4 00:06:51.418 Serial Number: 12341 00:06:51.418 Model Number: QEMU NVMe Ctrl 00:06:51.418 Firmware Version: 8.0.0 00:06:51.418 Recommended Arb Burst: 6 00:06:51.418 IEEE OUI Identifier: 00 54 52 00:06:51.418 Multi-path I/O 00:06:51.418 May have multiple subsystem ports: No 00:06:51.418 May have multiple controllers: No 00:06:51.418 Associated with SR-IOV VF: No 00:06:51.418 Max Data Transfer Size: 524288 00:06:51.418 Max Number of Namespaces: 256 00:06:51.418 Max Number of I/O Queues: 64 00:06:51.418 NVMe Specification Version (VS): 1.4 00:06:51.418 NVMe Specification Version (Identify): 1.4 00:06:51.418 Maximum Queue Entries: 2048 00:06:51.418 Contiguous Queues Required: Yes 00:06:51.418 Arbitration Mechanisms Supported 00:06:51.418 Weighted Round Robin: Not Supported 00:06:51.418 Vendor Specific: Not Supported 00:06:51.418 Reset Timeout: 7500 ms 00:06:51.418 Doorbell Stride: 
4 bytes 00:06:51.418 NVM Subsystem Reset: Not Supported 00:06:51.418 Command Sets Supported 00:06:51.418 NVM Command Set: Supported 00:06:51.418 Boot Partition: Not Supported 00:06:51.418 Memory Page Size Minimum: 4096 bytes 00:06:51.418 Memory Page Size Maximum: 65536 bytes 00:06:51.418 Persistent Memory Region: Not Supported 00:06:51.418 Optional Asynchronous Events Supported 00:06:51.418 Namespace Attribute Notices: Supported 00:06:51.418 Firmware Activation Notices: Not Supported 00:06:51.418 ANA Change Notices: Not Supported 00:06:51.418 PLE Aggregate Log Change Notices: Not Supported 00:06:51.418 LBA Status Info Alert Notices: Not Supported 00:06:51.418 EGE Aggregate Log Change Notices: Not Supported 00:06:51.418 Normal NVM Subsystem Shutdown event: Not Supported 00:06:51.418 Zone Descriptor Change Notices: Not Supported 00:06:51.418 Discovery Log Change Notices: Not Supported 00:06:51.418 Controller Attributes 00:06:51.418 128-bit Host Identifier: Not Supported 00:06:51.418 Non-Operational Permissive Mode: Not Supported 00:06:51.418 NVM Sets: Not Supported 00:06:51.418 Read Recovery Levels: Not Supported 00:06:51.418 Endurance Groups: Not Supported 00:06:51.418 Predictable Latency Mode: Not Supported 00:06:51.418 Traffic Based Keep ALive: Not Supported 00:06:51.418 Namespace Granularity: Not Supported 00:06:51.418 SQ Associations: Not Supported 00:06:51.418 UUID List: Not Supported 00:06:51.418 Multi-Domain Subsystem: Not Supported 00:06:51.418 Fixed Capacity Management: Not Supported 00:06:51.418 Variable Capacity Management: Not Supported 00:06:51.418 Delete Endurance Group: Not Supported 00:06:51.418 Delete NVM Set: Not Supported 00:06:51.418 Extended LBA Formats Supported: Supported 00:06:51.418 Flexible Data Placement Supported: Not Supported 00:06:51.418 00:06:51.418 Controller Memory Buffer Support 00:06:51.418 ================================ 00:06:51.418 Supported: No 00:06:51.418 00:06:51.419 Persistent Memory Region Support 00:06:51.419 ================================ 00:06:51.419 Supported: No 00:06:51.419 00:06:51.419 Admin Command Set Attributes 00:06:51.419 ============================ 00:06:51.419 Security Send/Receive: Not Supported 00:06:51.419 Format NVM: Supported 00:06:51.419 Firmware Activate/Download: Not Supported 00:06:51.419 Namespace Management: Supported 00:06:51.419 Device Self-Test: Not Supported 00:06:51.419 Directives: Supported 00:06:51.419 NVMe-MI: Not Supported 00:06:51.419 Virtualization Management: Not Supported 00:06:51.419 Doorbell Buffer Config: Supported 00:06:51.419 Get LBA Status Capability: Not Supported 00:06:51.419 Command & Feature Lockdown Capability: Not Supported 00:06:51.419 Abort Command Limit: 4 00:06:51.419 Async Event Request Limit: 4 00:06:51.419 Number of Firmware Slots: N/A 00:06:51.419 Firmware Slot 1 Read-Only: N/A 00:06:51.419 Firmware Activation Without Reset: N/A 00:06:51.419 Multiple Update Detection Support: N/A 00:06:51.419 Firmware Update Granularity: No Information Provided 00:06:51.419 Per-Namespace SMART Log: Yes 00:06:51.419 Asymmetric Namespace Access Log Page: Not Supported 00:06:51.419 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:06:51.419 Command Effects Log Page: Supported 00:06:51.419 Get Log Page Extended Data: Supported 00:06:51.419 Telemetry Log Pages: Not Supported 00:06:51.419 Persistent Event Log Pages: Not Supported 00:06:51.419 Supported Log Pages Log Page: May Support 00:06:51.419 Commands Supported & Effects Log Page: Not Supported 00:06:51.419 Feature Identifiers & Effects Log Page:May Support 
00:06:51.419 NVMe-MI Commands & Effects Log Page: May Support 00:06:51.419 Data Area 4 for Telemetry Log: Not Supported 00:06:51.419 Error Log Page Entries Supported: 1 00:06:51.419 Keep Alive: Not Supported 00:06:51.419 00:06:51.419 NVM Command Set Attributes 00:06:51.419 ========================== 00:06:51.419 Submission Queue Entry Size 00:06:51.419 Max: 64 00:06:51.419 Min: 64 00:06:51.419 Completion Queue Entry Size 00:06:51.419 Max: 16 00:06:51.419 Min: 16 00:06:51.419 Number of Namespaces: 256 00:06:51.419 Compare Command: Supported 00:06:51.419 Write Uncorrectable Command: Not Supported 00:06:51.419 Dataset Management Command: Supported 00:06:51.419 Write Zeroes Command: Supported 00:06:51.419 Set Features Save Field: Supported 00:06:51.419 Reservations: Not Supported 00:06:51.419 Timestamp: Supported 00:06:51.419 Copy: Supported 00:06:51.419 Volatile Write Cache: Present 00:06:51.419 Atomic Write Unit (Normal): 1 00:06:51.419 Atomic Write Unit (PFail): 1 00:06:51.419 Atomic Compare & Write Unit: 1 00:06:51.419 Fused Compare & Write: Not Supported 00:06:51.419 Scatter-Gather List 00:06:51.419 SGL Command Set: Supported 00:06:51.419 SGL Keyed: Not Supported 00:06:51.419 SGL Bit Bucket Descriptor: Not Supported 00:06:51.419 SGL Metadata Pointer: Not Supported 00:06:51.419 Oversized SGL: Not Supported 00:06:51.419 SGL Metadata Address: Not Supported 00:06:51.419 SGL Offset: Not Supported 00:06:51.419 Transport SGL Data Block: Not Supported 00:06:51.419 Replay Protected Memory Block: Not Supported 00:06:51.419 00:06:51.419 Firmware Slot Information 00:06:51.419 ========================= 00:06:51.419 Active slot: 1 00:06:51.419 Slot 1 Firmware Revision: 1.0 00:06:51.419 00:06:51.419 00:06:51.419 Commands Supported and Effects 00:06:51.419 ============================== 00:06:51.419 Admin Commands 00:06:51.419 -------------- 00:06:51.419 Delete I/O Submission Queue (00h): Supported 00:06:51.419 Create I/O Submission Queue (01h): Supported 00:06:51.419 Get Log Page (02h): Supported 00:06:51.419 Delete I/O Completion Queue (04h): Supported 00:06:51.419 Create I/O Completion Queue (05h): Supported 00:06:51.419 Identify (06h): Supported 00:06:51.419 Abort (08h): Supported 00:06:51.419 Set Features (09h): Supported 00:06:51.419 Get Features (0Ah): Supported 00:06:51.419 Asynchronous Event Request (0Ch): Supported 00:06:51.419 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:51.419 Directive Send (19h): Supported 00:06:51.419 Directive Receive (1Ah): Supported 00:06:51.419 Virtualization Management (1Ch): Supported 00:06:51.419 Doorbell Buffer Config (7Ch): Supported 00:06:51.419 Format NVM (80h): Supported LBA-Change 00:06:51.419 I/O Commands 00:06:51.419 ------------ 00:06:51.419 Flush (00h): Supported LBA-Change 00:06:51.419 Write (01h): Supported LBA-Change 00:06:51.419 Read (02h): Supported 00:06:51.419 Compare (05h): Supported 00:06:51.419 Write Zeroes (08h): Supported LBA-Change 00:06:51.419 Dataset Management (09h): Supported LBA-Change 00:06:51.419 Unknown (0Ch): Supported 00:06:51.419 Unknown (12h): Supported 00:06:51.419 Copy (19h): Supported LBA-Change 00:06:51.419 Unknown (1Dh): Supported LBA-Change 00:06:51.419 00:06:51.419 Error Log 00:06:51.419 ========= 00:06:51.419 00:06:51.419 Arbitration 00:06:51.419 =========== 00:06:51.419 Arbitration Burst: no limit 00:06:51.419 00:06:51.419 Power Management 00:06:51.419 ================ 00:06:51.419 Number of Power States: 1 00:06:51.419 Current Power State: Power State #0 00:06:51.419 Power State #0: 00:06:51.419 Max 
Power: 25.00 W 00:06:51.419 Non-Operational State: Operational 00:06:51.419 Entry Latency: 16 microseconds 00:06:51.419 Exit Latency: 4 microseconds 00:06:51.419 Relative Read Throughput: 0 00:06:51.419 Relative Read Latency: 0 00:06:51.419 Relative Write Throughput: 0 00:06:51.419 Relative Write Latency: 0 00:06:51.419 Idle Power: Not Reported 00:06:51.419 Active Power: Not Reported 00:06:51.419 Non-Operational Permissive Mode: Not Supported 00:06:51.419 00:06:51.419 Health Information 00:06:51.419 ================== 00:06:51.419 Critical Warnings: 00:06:51.419 Available Spare Space: OK 00:06:51.419 Temperature: OK 00:06:51.419 Device Reliability: OK 00:06:51.419 Read Only: No 00:06:51.419 Volatile Memory Backup: OK 00:06:51.419 Current Temperature: 323 Kelvin (50 Celsius) 00:06:51.419 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:51.419 Available Spare: 0% 00:06:51.419 Available Spare Threshold: 0% 00:06:51.419 Life Percentage Used: 0% 00:06:51.419 Data Units Read: 1043 00:06:51.419 Data Units Written: 904 00:06:51.419 Host Read Commands: 56283 00:06:51.419 Host Write Commands: 54953 00:06:51.419 Controller Busy Time: 0 minutes 00:06:51.419 Power Cycles: 0 00:06:51.419 Power On Hours: 0 hours 00:06:51.419 Unsafe Shutdowns: 0 00:06:51.419 Unrecoverable Media Errors: 0 00:06:51.419 Lifetime Error Log Entries: 0 00:06:51.419 Warning Temperature Time: 0 minutes 00:06:51.419 Critical Temperature Time: 0 minutes 00:06:51.419 00:06:51.419 Number of Queues 00:06:51.419 ================ 00:06:51.419 Number of I/O Submission Queues: 64 00:06:51.419 Number of I/O Completion Queues: 64 00:06:51.419 00:06:51.419 ZNS Specific Controller Data 00:06:51.419 ============================ 00:06:51.419 Zone Append Size Limit: 0 00:06:51.419 00:06:51.419 00:06:51.419 Active Namespaces 00:06:51.419 ================= 00:06:51.419 Namespace ID:1 00:06:51.419 Error Recovery Timeout: Unlimited 00:06:51.419 Command Set Identifier: NVM (00h) 00:06:51.419 Deallocate: Supported 00:06:51.419 Deallocated/Unwritten Error: Supported 00:06:51.419 Deallocated Read Value: All 0x00 00:06:51.419 Deallocate in Write Zeroes: Not Supported 00:06:51.419 Deallocated Guard Field: 0xFFFF 00:06:51.419 Flush: Supported 00:06:51.419 Reservation: Not Supported 00:06:51.419 Namespace Sharing Capabilities: Private 00:06:51.419 Size (in LBAs): 1310720 (5GiB) 00:06:51.419 Capacity (in LBAs): 1310720 (5GiB) 00:06:51.419 Utilization (in LBAs): 1310720 (5GiB) 00:06:51.419 Thin Provisioning: Not Supported 00:06:51.419 Per-NS Atomic Units: No 00:06:51.419 Maximum Single Source Range Length: 128 00:06:51.419 Maximum Copy Length: 128 00:06:51.419 Maximum Source Range Count: 128 00:06:51.419 NGUID/EUI64 Never Reused: No 00:06:51.419 Namespace Write Protected: No 00:06:51.419 Number of LBA Formats: 8 00:06:51.419 Current LBA Format: LBA Format #04 00:06:51.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.419 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.419 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.419 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.419 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.419 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.419 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.419 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.419 00:06:51.419 NVM Specific Namespace Data 00:06:51.419 =========================== 00:06:51.420 Logical Block Storage Tag Mask: 0 00:06:51.420 Protection Information Capabilities: 00:06:51.420 16b 
Guard Protection Information Storage Tag Support: No 00:06:51.420 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.420 Storage Tag Check Read Support: No 00:06:51.420 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.420 ===================================================== 00:06:51.420 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:51.420 ===================================================== 00:06:51.420 Controller Capabilities/Features 00:06:51.420 ================================ 00:06:51.420 Vendor ID: 1b36 00:06:51.420 Subsystem Vendor ID: 1af4 00:06:51.420 Serial Number: 12343 00:06:51.420 Model Number: QEMU NVMe Ctrl 00:06:51.420 Firmware Version: 8.0.0 00:06:51.420 Recommended Arb Burst: 6 00:06:51.420 IEEE OUI Identifier: 00 54 52 00:06:51.420 Multi-path I/O 00:06:51.420 May have multiple subsystem ports: No 00:06:51.420 May have multiple controllers: Yes 00:06:51.420 Associated with SR-IOV VF: No 00:06:51.420 Max Data Transfer Size: 524288 00:06:51.420 Max Number of Namespaces: 256 00:06:51.420 Max Number of I/O Queues: 64 00:06:51.420 NVMe Specification Version (VS): 1.4 00:06:51.420 NVMe Specification Version (Identify): 1.4 00:06:51.420 Maximum Queue Entries: 2048 00:06:51.420 Contiguous Queues Required: Yes 00:06:51.420 Arbitration Mechanisms Supported 00:06:51.420 Weighted Round Robin: Not Supported 00:06:51.420 Vendor Specific: Not Supported 00:06:51.420 Reset Timeout: 7500 ms 00:06:51.420 Doorbell Stride: 4 bytes 00:06:51.420 NVM Subsystem Reset: Not Supported 00:06:51.420 Command Sets Supported 00:06:51.420 NVM Command Set: Supported 00:06:51.420 Boot Partition: Not Supported 00:06:51.420 Memory Page Size Minimum: 4096 bytes 00:06:51.420 Memory Page Size Maximum: 65536 bytes 00:06:51.420 Persistent Memory Region: Not Supported 00:06:51.420 Optional Asynchronous Events Supported 00:06:51.420 Namespace Attribute Notices: Supported 00:06:51.420 Firmware Activation Notices: Not Supported 00:06:51.420 ANA Change Notices: Not Supported 00:06:51.420 PLE Aggregate Log Change Notices: Not Supported 00:06:51.420 LBA Status Info Alert Notices: Not Supported 00:06:51.420 EGE Aggregate Log Change Notices: Not Supported 00:06:51.420 Normal NVM Subsystem Shutdown event: Not Supported 00:06:51.420 Zone Descriptor Change Notices: Not Supported 00:06:51.420 Discovery Log Change Notices: Not Supported 00:06:51.420 Controller Attributes 00:06:51.420 128-bit Host Identifier: Not Supported 00:06:51.420 Non-Operational Permissive Mode: Not Supported 00:06:51.420 NVM Sets: Not Supported 00:06:51.420 Read Recovery Levels: Not Supported 00:06:51.420 Endurance Groups: Supported 00:06:51.420 Predictable Latency Mode: Not Supported 00:06:51.420 Traffic Based Keep ALive: Not Supported 00:06:51.420 
Namespace Granularity: Not Supported 00:06:51.420 SQ Associations: Not Supported 00:06:51.420 UUID List: Not Supported 00:06:51.420 Multi-Domain Subsystem: Not Supported 00:06:51.420 Fixed Capacity Management: Not Supported 00:06:51.420 Variable Capacity Management: Not Supported 00:06:51.420 Delete Endurance Group: Not Supported 00:06:51.420 Delete NVM Set: Not Supported 00:06:51.420 Extended LBA Formats Supported: Supported 00:06:51.420 Flexible Data Placement Supported: Supported 00:06:51.420 00:06:51.420 Controller Memory Buffer Support 00:06:51.420 ================================ 00:06:51.420 Supported: No 00:06:51.420 00:06:51.420 Persistent Memory Region Support 00:06:51.420 ================================ 00:06:51.420 Supported: No 00:06:51.420 00:06:51.420 Admin Command Set Attributes 00:06:51.420 ============================ 00:06:51.420 Security Send/Receive: Not Supported 00:06:51.420 Format NVM: Supported 00:06:51.420 Firmware Activate/Download: Not Supported 00:06:51.420 Namespace Management: Supported 00:06:51.420 Device Self-Test: Not Supported 00:06:51.420 Directives: Supported 00:06:51.420 NVMe-MI: Not Supported 00:06:51.420 Virtualization Management: Not Supported 00:06:51.420 Doorbell Buffer Config: Supported 00:06:51.420 Get LBA Status Capability: Not Supported 00:06:51.420 Command & Feature Lockdown Capability: Not Supported 00:06:51.420 Abort Command Limit: 4 00:06:51.420 Async Event Request Limit: 4 00:06:51.420 Number of Firmware Slots: N/A 00:06:51.420 Firmware Slot 1 Read-Only: N/A 00:06:51.420 Firmware Activation Without Reset: N/A 00:06:51.420 Multiple Update Detection Support: N/A 00:06:51.420 Firmware Update Granularity: No Information Provided 00:06:51.420 Per-Namespace SMART Log: Yes 00:06:51.420 Asymmetric Namespace Access Log Page: Not Supported 00:06:51.420 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:06:51.420 Command Effects Log Page: Supported 00:06:51.420 Get Log Page Extended Data: Supported 00:06:51.420 Telemetry Log Pages: Not Supported 00:06:51.420 Persistent Event Log Pages: Not Supported 00:06:51.420 Supported Log Pages Log Page: May Support 00:06:51.420 Commands Supported & Effects Log Page: Not Supported 00:06:51.420 Feature Identifiers & Effects Log Page:May Support 00:06:51.420 NVMe-MI Commands & Effects Log Page: May Support 00:06:51.420 Data Area 4 for Telemetry Log: Not Supported 00:06:51.420 Error Log Page Entries Supported: 1 00:06:51.420 Keep Alive: Not Supported 00:06:51.420 00:06:51.420 NVM Command Set Attributes 00:06:51.420 ========================== 00:06:51.420 Submission Queue Entry Size 00:06:51.420 Max: 64 00:06:51.420 Min: 64 00:06:51.420 Completion Queue Entry Size 00:06:51.420 Max: 16 00:06:51.420 Min: 16 00:06:51.420 Number of Namespaces: 256 00:06:51.420 Compare Command: Supported 00:06:51.420 Write Uncorrectable Command: Not Supported 00:06:51.420 Dataset Management Command: Supported 00:06:51.420 Write Zeroes Command: Supported 00:06:51.420 Set Features Save Field: Supported 00:06:51.420 Reservations: Not Supported 00:06:51.420 Timestamp: Supported 00:06:51.420 Copy: Supported 00:06:51.420 Volatile Write Cache: Present 00:06:51.420 Atomic Write Unit (Normal): 1 00:06:51.420 Atomic Write Unit (PFail): 1 00:06:51.420 Atomic Compare & Write Unit: 1 00:06:51.420 Fused Compare & Write: Not Supported 00:06:51.420 Scatter-Gather List 00:06:51.420 SGL Command Set: Supported 00:06:51.420 SGL Keyed: Not Supported 00:06:51.420 SGL Bit Bucket Descriptor: Not Supported 00:06:51.420 SGL Metadata Pointer: Not Supported 
00:06:51.420 Oversized SGL: Not Supported 00:06:51.420 SGL Metadata Address: Not Supported 00:06:51.420 SGL Offset: Not Supported 00:06:51.420 Transport SGL Data Block: Not Supported 00:06:51.420 Replay Protected Memory Block: Not Supported 00:06:51.420 00:06:51.420 Firmware Slot Information 00:06:51.420 ========================= 00:06:51.420 Active slot: 1 00:06:51.420 Slot 1 Firmware Revision: 1.0 00:06:51.420 00:06:51.420 00:06:51.420 Commands Supported and Effects 00:06:51.420 ============================== 00:06:51.420 Admin Commands 00:06:51.420 -------------- 00:06:51.420 Delete I/O Submission Queue (00h): Supported 00:06:51.420 Create I/O Submission Queue (01h): Supported 00:06:51.420 Get Log Page (02h): Supported 00:06:51.420 Delete I/O Completion Queue (04h): Supported 00:06:51.420 Create I/O Completion Queue (05h): Supported 00:06:51.420 Identify (06h): Supported 00:06:51.420 Abort (08h): Supported 00:06:51.420 Set Features (09h): Supported 00:06:51.420 Get Features (0Ah): Supported 00:06:51.420 Asynchronous Event Request (0Ch): Supported 00:06:51.420 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:51.420 Directive Send (19h): Supported 00:06:51.420 Directive Receive (1Ah): Supported 00:06:51.420 Virtualization Management (1Ch): Supported 00:06:51.420 Doorbell Buffer Config (7Ch): Supported 00:06:51.420 Format NVM (80h): Supported LBA-Change 00:06:51.420 I/O Commands 00:06:51.420 ------------ 00:06:51.420 Flush (00h): Supported LBA-Change 00:06:51.420 Write (01h): Supported LBA-Change 00:06:51.420 Read (02h): Supported 00:06:51.420 Compare (05h): Supported 00:06:51.420 Write Zeroes (08h): Supported LBA-Change 00:06:51.420 Dataset Management (09h): Supported LBA-Change 00:06:51.420 Unknown (0Ch): Supported 00:06:51.420 Unknown (12h): Supported 00:06:51.421 Copy (19h): Supported LBA-Change 00:06:51.421 Unknown (1Dh): Supported LBA-Change 00:06:51.421 00:06:51.421 Error Log 00:06:51.421 ========= 00:06:51.421 00:06:51.421 Arbitration 00:06:51.421 =========== 00:06:51.421 Arbitration Burst: no limit 00:06:51.421 00:06:51.421 Power Management 00:06:51.421 ================ 00:06:51.421 Number of Power States: 1 00:06:51.421 Current Power State: Power State #0 00:06:51.421 Power State #0: 00:06:51.421 Max Power: 25.00 W 00:06:51.421 Non-Operational State: Operational 00:06:51.421 Entry Latency: 16 microseconds 00:06:51.421 Exit Latency: 4 microseconds 00:06:51.421 Relative Read Throughput: 0 00:06:51.421 Relative Read Latency: 0 00:06:51.421 Relative Write Throughput: 0 00:06:51.421 Relative Write Latency: 0 00:06:51.421 Idle Power: Not Reported 00:06:51.421 Active Power: Not Reported 00:06:51.421 Non-Operational Permissive Mode: Not Supported 00:06:51.421 00:06:51.421 Health Information 00:06:51.421 ================== 00:06:51.421 Critical Warnings: 00:06:51.421 Available Spare Space: OK 00:06:51.421 Temperature: OK 00:06:51.421 Device Reliability: OK 00:06:51.421 Read Only: No 00:06:51.421 Volatile Memory Backup: OK 00:06:51.421 Current Temperature: 323 Kelvin (50 Celsius) 00:06:51.421 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:51.421 Available Spare: 0% 00:06:51.421 Available Spare Threshold: 0% 00:06:51.421 Life Percentage Used: [2024-11-20 06:05:11.001809] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62922 terminated unexpected 00:06:51.421 [2024-11-20 06:05:11.002644] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62922 terminated unexpected 00:06:51.421 [2024-11-20 
06:05:11.003228] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62922 terminated unexpected 00:06:51.421 [2024-11-20 06:05:11.004157] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62922 terminated unexpected 00:06:51.421 0% 00:06:51.421 Data Units Read: 839 00:06:51.421 Data Units Written: 768 00:06:51.421 Host Read Commands: 40174 00:06:51.421 Host Write Commands: 39597 00:06:51.421 Controller Busy Time: 0 minutes 00:06:51.421 Power Cycles: 0 00:06:51.421 Power On Hours: 0 hours 00:06:51.421 Unsafe Shutdowns: 0 00:06:51.421 Unrecoverable Media Errors: 0 00:06:51.421 Lifetime Error Log Entries: 0 00:06:51.421 Warning Temperature Time: 0 minutes 00:06:51.421 Critical Temperature Time: 0 minutes 00:06:51.421 00:06:51.421 Number of Queues 00:06:51.421 ================ 00:06:51.421 Number of I/O Submission Queues: 64 00:06:51.421 Number of I/O Completion Queues: 64 00:06:51.421 00:06:51.421 ZNS Specific Controller Data 00:06:51.421 ============================ 00:06:51.421 Zone Append Size Limit: 0 00:06:51.421 00:06:51.421 00:06:51.421 Active Namespaces 00:06:51.421 ================= 00:06:51.421 Namespace ID:1 00:06:51.421 Error Recovery Timeout: Unlimited 00:06:51.421 Command Set Identifier: NVM (00h) 00:06:51.421 Deallocate: Supported 00:06:51.421 Deallocated/Unwritten Error: Supported 00:06:51.421 Deallocated Read Value: All 0x00 00:06:51.421 Deallocate in Write Zeroes: Not Supported 00:06:51.421 Deallocated Guard Field: 0xFFFF 00:06:51.421 Flush: Supported 00:06:51.421 Reservation: Not Supported 00:06:51.421 Namespace Sharing Capabilities: Multiple Controllers 00:06:51.421 Size (in LBAs): 262144 (1GiB) 00:06:51.421 Capacity (in LBAs): 262144 (1GiB) 00:06:51.421 Utilization (in LBAs): 262144 (1GiB) 00:06:51.421 Thin Provisioning: Not Supported 00:06:51.421 Per-NS Atomic Units: No 00:06:51.421 Maximum Single Source Range Length: 128 00:06:51.421 Maximum Copy Length: 128 00:06:51.421 Maximum Source Range Count: 128 00:06:51.421 NGUID/EUI64 Never Reused: No 00:06:51.421 Namespace Write Protected: No 00:06:51.421 Endurance group ID: 1 00:06:51.421 Number of LBA Formats: 8 00:06:51.421 Current LBA Format: LBA Format #04 00:06:51.421 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.421 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.421 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.421 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.421 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.421 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.421 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.421 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.421 00:06:51.421 Get Feature FDP: 00:06:51.421 ================ 00:06:51.421 Enabled: Yes 00:06:51.421 FDP configuration index: 0 00:06:51.421 00:06:51.421 FDP configurations log page 00:06:51.421 =========================== 00:06:51.421 Number of FDP configurations: 1 00:06:51.421 Version: 0 00:06:51.421 Size: 112 00:06:51.421 FDP Configuration Descriptor: 0 00:06:51.421 Descriptor Size: 96 00:06:51.421 Reclaim Group Identifier format: 2 00:06:51.421 FDP Volatile Write Cache: Not Present 00:06:51.421 FDP Configuration: Valid 00:06:51.421 Vendor Specific Size: 0 00:06:51.421 Number of Reclaim Groups: 2 00:06:51.421 Number of Reclaim Unit Handles: 8 00:06:51.421 Max Placement Identifiers: 128 00:06:51.421 Number of Namespaces Supported: 256 00:06:51.421 Reclaim unit Nominal Size: 6000000 bytes 00:06:51.421 
00:06:51.421 Estimated Reclaim Unit Time Limit: Not Reported
00:06:51.421 RUH Desc #000: RUH Type: Initially Isolated
00:06:51.421 RUH Desc #001: RUH Type: Initially Isolated
00:06:51.421 RUH Desc #002: RUH Type: Initially Isolated
00:06:51.421 RUH Desc #003: RUH Type: Initially Isolated
00:06:51.421 RUH Desc #004: RUH Type: Initially Isolated
00:06:51.421 RUH Desc #005: RUH Type: Initially Isolated
00:06:51.421 RUH Desc #006: RUH Type: Initially Isolated
00:06:51.421 RUH Desc #007: RUH Type: Initially Isolated
00:06:51.421 
00:06:51.421 FDP reclaim unit handle usage log page
00:06:51.421 ======================================
00:06:51.421 Number of Reclaim Unit Handles: 8
00:06:51.421 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:06:51.421 RUH Usage Desc #001: RUH Attributes: Unused
00:06:51.421 RUH Usage Desc #002: RUH Attributes: Unused
00:06:51.421 RUH Usage Desc #003: RUH Attributes: Unused
00:06:51.421 RUH Usage Desc #004: RUH Attributes: Unused
00:06:51.421 RUH Usage Desc #005: RUH Attributes: Unused
00:06:51.421 RUH Usage Desc #006: RUH Attributes: Unused
00:06:51.421 RUH Usage Desc #007: RUH Attributes: Unused
00:06:51.421 
00:06:51.421 FDP statistics log page
00:06:51.421 =======================
00:06:51.421 Host bytes with metadata written: 478060544
00:06:51.421 Media bytes with metadata written: 478113792
00:06:51.421 Media bytes erased: 0
00:06:51.421 
00:06:51.421 FDP events log page
00:06:51.421 ===================
00:06:51.421 Number of FDP events: 0
00:06:51.421 
00:06:51.421 NVM Specific Namespace Data
00:06:51.421 ===========================
00:06:51.421 Logical Block Storage Tag Mask: 0
00:06:51.421 Protection Information Capabilities:
00:06:51.421 16b Guard Protection Information Storage Tag Support: No
00:06:51.421 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:51.421 Storage Tag Check Read Support: No
00:06:51.421 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.421 =====================================================
00:06:51.421 NVMe Controller at 0000:00:12.0 [1b36:0010]
00:06:51.421 =====================================================
00:06:51.421 Controller Capabilities/Features
00:06:51.421 ================================
00:06:51.421 Vendor ID: 1b36
00:06:51.421 Subsystem Vendor ID: 1af4
00:06:51.421 Serial Number: 12342
00:06:51.421 Model Number: QEMU NVMe Ctrl
00:06:51.421 Firmware Version: 8.0.0
00:06:51.421 Recommended Arb Burst: 6
00:06:51.421 IEEE OUI Identifier: 00 54 52
00:06:51.421 Multi-path I/O
00:06:51.421 May have multiple subsystem ports: No
00:06:51.421 May have multiple controllers: No
00:06:51.421 Associated with SR-IOV VF: No
00:06:51.421 Max Data Transfer Size: 524288
00:06:51.421 Max Number of Namespaces: 256
00:06:51.421 Max Number of I/O Queues: 64
00:06:51.422 NVMe Specification Version (VS): 1.4
00:06:51.422 NVMe Specification Version (Identify): 1.4
00:06:51.422 Maximum Queue Entries: 2048
00:06:51.422 Contiguous Queues Required: Yes
00:06:51.422 Arbitration Mechanisms Supported
00:06:51.422 Weighted Round Robin: Not Supported
00:06:51.422 Vendor Specific: Not Supported
00:06:51.422 Reset Timeout: 7500 ms
00:06:51.422 Doorbell Stride: 4 bytes
00:06:51.422 NVM Subsystem Reset: Not Supported
00:06:51.422 Command Sets Supported
00:06:51.422 NVM Command Set: Supported
00:06:51.422 Boot Partition: Not Supported
00:06:51.422 Memory Page Size Minimum: 4096 bytes
00:06:51.422 Memory Page Size Maximum: 65536 bytes
00:06:51.422 Persistent Memory Region: Not Supported
00:06:51.422 Optional Asynchronous Events Supported
00:06:51.422 Namespace Attribute Notices: Supported
00:06:51.422 Firmware Activation Notices: Not Supported
00:06:51.422 ANA Change Notices: Not Supported
00:06:51.422 PLE Aggregate Log Change Notices: Not Supported
00:06:51.422 LBA Status Info Alert Notices: Not Supported
00:06:51.422 EGE Aggregate Log Change Notices: Not Supported
00:06:51.422 Normal NVM Subsystem Shutdown event: Not Supported
00:06:51.422 Zone Descriptor Change Notices: Not Supported
00:06:51.422 Discovery Log Change Notices: Not Supported
00:06:51.422 Controller Attributes
00:06:51.422 128-bit Host Identifier: Not Supported
00:06:51.422 Non-Operational Permissive Mode: Not Supported
00:06:51.422 NVM Sets: Not Supported
00:06:51.422 Read Recovery Levels: Not Supported
00:06:51.422 Endurance Groups: Not Supported
00:06:51.422 Predictable Latency Mode: Not Supported
00:06:51.422 Traffic Based Keep Alive: Not Supported
00:06:51.422 Namespace Granularity: Not Supported
00:06:51.422 SQ Associations: Not Supported
00:06:51.422 UUID List: Not Supported
00:06:51.422 Multi-Domain Subsystem: Not Supported
00:06:51.422 Fixed Capacity Management: Not Supported
00:06:51.422 Variable Capacity Management: Not Supported
00:06:51.422 Delete Endurance Group: Not Supported
00:06:51.422 Delete NVM Set: Not Supported
00:06:51.422 Extended LBA Formats Supported: Supported
00:06:51.422 Flexible Data Placement Supported: Not Supported
00:06:51.422 
00:06:51.422 Controller Memory Buffer Support
00:06:51.422 ================================
00:06:51.422 Supported: No
00:06:51.422 
00:06:51.422 Persistent Memory Region Support
00:06:51.422 ================================
00:06:51.422 Supported: No
00:06:51.422 
00:06:51.422 Admin Command Set Attributes
00:06:51.422 ============================
00:06:51.422 Security Send/Receive: Not Supported
00:06:51.422 Format NVM: Supported
00:06:51.422 Firmware Activate/Download: Not Supported
00:06:51.422 Namespace Management: Supported
00:06:51.422 Device Self-Test: Not Supported
00:06:51.422 Directives: Supported
00:06:51.422 NVMe-MI: Not Supported
00:06:51.422 Virtualization Management: Not Supported
00:06:51.422 Doorbell Buffer Config: Supported
00:06:51.422 Get LBA Status Capability: Not Supported
00:06:51.422 Command & Feature Lockdown Capability: Not Supported
00:06:51.422 Abort Command Limit: 4
00:06:51.422 Async Event Request Limit: 4
00:06:51.422 Number of Firmware Slots: N/A
00:06:51.422 Firmware Slot 1 Read-Only: N/A
00:06:51.422 Firmware Activation Without Reset: N/A
00:06:51.422 Multiple Update Detection Support: N/A
00:06:51.422 Firmware Update Granularity: No Information Provided
00:06:51.422 Per-Namespace SMART Log: Yes
00:06:51.422 Asymmetric Namespace Access Log Page: Not Supported
00:06:51.422 Subsystem NQN: nqn.2019-08.org.qemu:12342
00:06:51.422 Command Effects Log Page: Supported
00:06:51.422 Get Log Page Extended Data: Supported
00:06:51.422 Telemetry Log Pages: Not Supported
00:06:51.422 Persistent Event Log Pages: Not Supported
00:06:51.422 Supported Log Pages Log Page: May Support
00:06:51.422 Commands Supported & Effects Log Page: Not Supported
00:06:51.422 Feature Identifiers & Effects Log Page: May Support
00:06:51.422 NVMe-MI Commands & Effects Log Page: May Support
00:06:51.422 Data Area 4 for Telemetry Log: Not Supported
00:06:51.422 Error Log Page Entries Supported: 1
00:06:51.422 Keep Alive: Not Supported
00:06:51.422 
00:06:51.422 NVM Command Set Attributes
00:06:51.422 ==========================
00:06:51.422 Submission Queue Entry Size
00:06:51.422 Max: 64
00:06:51.422 Min: 64
00:06:51.422 Completion Queue Entry Size
00:06:51.422 Max: 16
00:06:51.422 Min: 16
00:06:51.422 Number of Namespaces: 256
00:06:51.422 Compare Command: Supported
00:06:51.422 Write Uncorrectable Command: Not Supported
00:06:51.422 Dataset Management Command: Supported
00:06:51.422 Write Zeroes Command: Supported
00:06:51.422 Set Features Save Field: Supported
00:06:51.422 Reservations: Not Supported
00:06:51.422 Timestamp: Supported
00:06:51.422 Copy: Supported
00:06:51.422 Volatile Write Cache: Present
00:06:51.422 Atomic Write Unit (Normal): 1
00:06:51.422 Atomic Write Unit (PFail): 1
00:06:51.422 Atomic Compare & Write Unit: 1
00:06:51.422 Fused Compare & Write: Not Supported
00:06:51.422 Scatter-Gather List
00:06:51.422 SGL Command Set: Supported
00:06:51.422 SGL Keyed: Not Supported
00:06:51.422 SGL Bit Bucket Descriptor: Not Supported
00:06:51.422 SGL Metadata Pointer: Not Supported
00:06:51.422 Oversized SGL: Not Supported
00:06:51.422 SGL Metadata Address: Not Supported
00:06:51.422 SGL Offset: Not Supported
00:06:51.422 Transport SGL Data Block: Not Supported
00:06:51.422 Replay Protected Memory Block: Not Supported
00:06:51.422 
00:06:51.422 Firmware Slot Information
00:06:51.422 =========================
00:06:51.422 Active slot: 1
00:06:51.422 Slot 1 Firmware Revision: 1.0
00:06:51.422 
00:06:51.422 
00:06:51.422 Commands Supported and Effects
00:06:51.422 ==============================
00:06:51.422 Admin Commands
00:06:51.422 --------------
00:06:51.422 Delete I/O Submission Queue (00h): Supported
00:06:51.422 Create I/O Submission Queue (01h): Supported
00:06:51.422 Get Log Page (02h): Supported
00:06:51.422 Delete I/O Completion Queue (04h): Supported
00:06:51.422 Create I/O Completion Queue (05h): Supported
00:06:51.422 Identify (06h): Supported
00:06:51.422 Abort (08h): Supported
00:06:51.422 Set Features (09h): Supported
00:06:51.422 Get Features (0Ah): Supported
00:06:51.422 Asynchronous Event Request (0Ch): Supported
00:06:51.422 Namespace Attachment (15h): Supported NS-Inventory-Change
00:06:51.422 Directive Send (19h): Supported
00:06:51.422 Directive Receive (1Ah): Supported
00:06:51.422 Virtualization Management (1Ch): Supported
00:06:51.422 Doorbell Buffer Config (7Ch): Supported
00:06:51.422 Format NVM (80h): Supported LBA-Change
00:06:51.422 I/O Commands
00:06:51.422 ------------
00:06:51.422 Flush (00h): Supported LBA-Change
00:06:51.422 Write (01h): Supported LBA-Change
00:06:51.422 Read (02h): Supported
00:06:51.422 Compare (05h): Supported
00:06:51.422 Write Zeroes (08h): Supported LBA-Change
00:06:51.422 Dataset Management (09h): Supported LBA-Change
00:06:51.422 Unknown (0Ch): Supported
00:06:51.422 Unknown (12h): Supported
00:06:51.422 Copy (19h): Supported LBA-Change
00:06:51.422 Unknown (1Dh): Supported LBA-Change
00:06:51.422 
00:06:51.422 Error Log
00:06:51.422 =========
00:06:51.422 
00:06:51.422 Arbitration
00:06:51.422 ===========
00:06:51.422 Arbitration Burst: no limit
00:06:51.422 
00:06:51.422 Power Management
00:06:51.422 ================
00:06:51.422 Number of Power States: 1
00:06:51.422 Current Power State: Power State #0
00:06:51.423 Power State #0:
00:06:51.423 Max Power: 25.00 W
00:06:51.423 Non-Operational State: Operational
00:06:51.423 Entry Latency: 16 microseconds
00:06:51.423 Exit Latency: 4 microseconds
00:06:51.423 Relative Read Throughput: 0
00:06:51.423 Relative Read Latency: 0
00:06:51.423 Relative Write Throughput: 0
00:06:51.423 Relative Write Latency: 0
00:06:51.423 Idle Power: Not Reported
00:06:51.423 Active Power: Not Reported
00:06:51.423 Non-Operational Permissive Mode: Not Supported
00:06:51.423 
00:06:51.423 Health Information
00:06:51.423 ==================
00:06:51.423 Critical Warnings:
00:06:51.423 Available Spare Space: OK
00:06:51.423 Temperature: OK
00:06:51.423 Device Reliability: OK
00:06:51.423 Read Only: No
00:06:51.423 Volatile Memory Backup: OK
00:06:51.423 Current Temperature: 323 Kelvin (50 Celsius)
00:06:51.423 Temperature Threshold: 343 Kelvin (70 Celsius)
00:06:51.423 Available Spare: 0%
00:06:51.423 Available Spare Threshold: 0%
00:06:51.423 Life Percentage Used: 0%
00:06:51.423 Data Units Read: 2246
00:06:51.423 Data Units Written: 2033
00:06:51.423 Host Read Commands: 118404
00:06:51.423 Host Write Commands: 116677
00:06:51.423 Controller Busy Time: 0 minutes
00:06:51.423 Power Cycles: 0
00:06:51.423 Power On Hours: 0 hours
00:06:51.423 Unsafe Shutdowns: 0
00:06:51.423 Unrecoverable Media Errors: 0
00:06:51.423 Lifetime Error Log Entries: 0
00:06:51.423 Warning Temperature Time: 0 minutes
00:06:51.423 Critical Temperature Time: 0 minutes
00:06:51.423 
00:06:51.423 Number of Queues
00:06:51.423 ================
00:06:51.423 Number of I/O Submission Queues: 64
00:06:51.423 Number of I/O Completion Queues: 64
00:06:51.423 
00:06:51.423 ZNS Specific Controller Data
00:06:51.423 ============================
00:06:51.423 Zone Append Size Limit: 0
00:06:51.423 
00:06:51.423 
00:06:51.423 Active Namespaces
00:06:51.423 =================
00:06:51.423 Namespace ID:1
00:06:51.423 Error Recovery Timeout: Unlimited
00:06:51.423 Command Set Identifier: NVM (00h)
00:06:51.423 Deallocate: Supported
00:06:51.423 Deallocated/Unwritten Error: Supported
00:06:51.423 Deallocated Read Value: All 0x00
00:06:51.423 Deallocate in Write Zeroes: Not Supported
00:06:51.423 Deallocated Guard Field: 0xFFFF
00:06:51.423 Flush: Supported
00:06:51.423 Reservation: Not Supported
00:06:51.423 Namespace Sharing Capabilities: Private
00:06:51.423 Size (in LBAs): 1048576 (4GiB)
00:06:51.423 Capacity (in LBAs): 1048576 (4GiB)
00:06:51.423 Utilization (in LBAs): 1048576 (4GiB)
00:06:51.423 Thin Provisioning: Not Supported
00:06:51.423 Per-NS Atomic Units: No
00:06:51.423 Maximum Single Source Range Length: 128
00:06:51.423 Maximum Copy Length: 128
00:06:51.423 Maximum Source Range Count: 128
00:06:51.423 NGUID/EUI64 Never Reused: No
00:06:51.423 Namespace Write Protected: No
00:06:51.423 Number of LBA Formats: 8
00:06:51.423 Current LBA Format: LBA Format #04
00:06:51.423 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:51.423 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:51.423 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:51.423 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:51.423 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:51.423 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:51.423 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:51.423 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:51.423 
00:06:51.423 NVM Specific Namespace Data
00:06:51.423 ===========================
00:06:51.423 Logical Block Storage Tag Mask: 0
00:06:51.423 Protection Information Capabilities:
00:06:51.423 16b Guard Protection Information Storage Tag Support: No
00:06:51.423 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:51.423 Storage Tag Check Read Support: No
00:06:51.423 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Namespace ID:2
00:06:51.423 Error Recovery Timeout: Unlimited
00:06:51.423 Command Set Identifier: NVM (00h)
00:06:51.423 Deallocate: Supported
00:06:51.423 Deallocated/Unwritten Error: Supported
00:06:51.423 Deallocated Read Value: All 0x00
00:06:51.423 Deallocate in Write Zeroes: Not Supported
00:06:51.423 Deallocated Guard Field: 0xFFFF
00:06:51.423 Flush: Supported
00:06:51.423 Reservation: Not Supported
00:06:51.423 Namespace Sharing Capabilities: Private
00:06:51.423 Size (in LBAs): 1048576 (4GiB)
00:06:51.423 Capacity (in LBAs): 1048576 (4GiB)
00:06:51.423 Utilization (in LBAs): 1048576 (4GiB)
00:06:51.423 Thin Provisioning: Not Supported
00:06:51.423 Per-NS Atomic Units: No
00:06:51.423 Maximum Single Source Range Length: 128
00:06:51.423 Maximum Copy Length: 128
00:06:51.423 Maximum Source Range Count: 128
00:06:51.423 NGUID/EUI64 Never Reused: No
00:06:51.423 Namespace Write Protected: No
00:06:51.423 Number of LBA Formats: 8
00:06:51.423 Current LBA Format: LBA Format #04
00:06:51.423 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:51.423 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:51.423 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:51.423 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:51.423 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:51.423 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:51.423 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:51.423 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:51.423 
00:06:51.423 NVM Specific Namespace Data
00:06:51.423 ===========================
00:06:51.423 Logical Block Storage Tag Mask: 0
00:06:51.423 Protection Information Capabilities:
00:06:51.423 16b Guard Protection Information Storage Tag Support: No
00:06:51.423 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:51.423 Storage Tag Check Read Support: No
00:06:51.423 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.423 Namespace ID:3
00:06:51.423 Error Recovery Timeout: Unlimited
00:06:51.423 Command Set Identifier: NVM (00h)
00:06:51.423 Deallocate: Supported
00:06:51.423 Deallocated/Unwritten Error: Supported
00:06:51.423 Deallocated Read Value: All 0x00
00:06:51.423 Deallocate in Write Zeroes: Not Supported
00:06:51.423 Deallocated Guard Field: 0xFFFF
00:06:51.423 Flush: Supported
00:06:51.423 Reservation: Not Supported
00:06:51.423 Namespace Sharing Capabilities: Private
00:06:51.423 Size (in LBAs): 1048576 (4GiB)
00:06:51.423 Capacity (in LBAs): 1048576 (4GiB)
00:06:51.423 Utilization (in LBAs): 1048576 (4GiB)
00:06:51.423 Thin Provisioning: Not Supported
00:06:51.423 Per-NS Atomic Units: No
00:06:51.423 Maximum Single Source Range Length: 128
00:06:51.423 Maximum Copy Length: 128
00:06:51.423 Maximum Source Range Count: 128
00:06:51.423 NGUID/EUI64 Never Reused: No
00:06:51.423 Namespace Write Protected: No
00:06:51.423 Number of LBA Formats: 8
00:06:51.423 Current LBA Format: LBA Format #04
00:06:51.423 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:51.423 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:51.423 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:51.423 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:51.423 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:51.423 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:51.423 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:51.423 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:51.423 
00:06:51.423 NVM Specific Namespace Data
00:06:51.423 ===========================
00:06:51.423 Logical Block Storage Tag Mask: 0
00:06:51.423 Protection Information Capabilities:
00:06:51.423 16b Guard Protection Information Storage Tag Support: No
00:06:51.423 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:51.423 Storage Tag Check Read Support: No
00:06:51.423 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.424 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.424 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.424 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.424 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.424 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.424 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.424 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
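The two xtrace lines that follow show nvme.sh@15-16 re-running the identify tool once per controller. A minimal standalone sketch of that loop (an editor's reconstruction, not part of the captured run); the contents of the bdfs array are inferred from the four PCIe addresses probed in this log, and the real script populates it during test setup:

    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # same invocation as the traced command below
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done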
00:06:51.424 06:05:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:06:51.424 06:05:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:06:51.683 =====================================================
00:06:51.683 NVMe Controller at 0000:00:10.0 [1b36:0010]
00:06:51.683 =====================================================
00:06:51.683 Controller Capabilities/Features
00:06:51.683 ================================
00:06:51.684 Vendor ID: 1b36
00:06:51.684 Subsystem Vendor ID: 1af4
00:06:51.684 Serial Number: 12340
00:06:51.684 Model Number: QEMU NVMe Ctrl
00:06:51.684 Firmware Version: 8.0.0
00:06:51.684 Recommended Arb Burst: 6
00:06:51.684 IEEE OUI Identifier: 00 54 52
00:06:51.684 Multi-path I/O
00:06:51.684 May have multiple subsystem ports: No
00:06:51.684 May have multiple controllers: No
00:06:51.684 Associated with SR-IOV VF: No
00:06:51.684 Max Data Transfer Size: 524288
00:06:51.684 Max Number of Namespaces: 256
00:06:51.684 Max Number of I/O Queues: 64
00:06:51.684 NVMe Specification Version (VS): 1.4
00:06:51.684 NVMe Specification Version (Identify): 1.4
00:06:51.684 Maximum Queue Entries: 2048
00:06:51.684 Contiguous Queues Required: Yes
00:06:51.684 Arbitration Mechanisms Supported
00:06:51.684 Weighted Round Robin: Not Supported
00:06:51.684 Vendor Specific: Not Supported
00:06:51.684 Reset Timeout: 7500 ms
00:06:51.684 Doorbell Stride: 4 bytes
00:06:51.684 NVM Subsystem Reset: Not Supported
00:06:51.684 Command Sets Supported
00:06:51.684 NVM Command Set: Supported
00:06:51.684 Boot Partition: Not Supported
00:06:51.684 Memory Page Size Minimum: 4096 bytes
00:06:51.684 Memory Page Size Maximum: 65536 bytes
00:06:51.684 Persistent Memory Region: Not Supported
00:06:51.684 Optional Asynchronous Events Supported
00:06:51.684 Namespace Attribute Notices: Supported
00:06:51.684 Firmware Activation Notices: Not Supported
00:06:51.684 ANA Change Notices: Not Supported
00:06:51.684 PLE Aggregate Log Change Notices: Not Supported
00:06:51.684 LBA Status Info Alert Notices: Not Supported
00:06:51.684 EGE Aggregate Log Change Notices: Not Supported
00:06:51.684 Normal NVM Subsystem Shutdown event: Not Supported
00:06:51.684 Zone Descriptor Change Notices: Not Supported
00:06:51.684 Discovery Log Change Notices: Not Supported
00:06:51.684 Controller Attributes
00:06:51.684 128-bit Host Identifier: Not Supported
00:06:51.684 Non-Operational Permissive Mode: Not Supported
00:06:51.684 NVM Sets: Not Supported
00:06:51.684 Read Recovery Levels: Not Supported
00:06:51.684 Endurance Groups: Not Supported
00:06:51.684 Predictable Latency Mode: Not Supported
00:06:51.684 Traffic Based Keep Alive: Not Supported
00:06:51.684 Namespace Granularity: Not Supported
00:06:51.684 SQ Associations: Not Supported
00:06:51.684 UUID List: Not Supported
00:06:51.684 Multi-Domain Subsystem: Not Supported
00:06:51.684 Fixed Capacity Management: Not Supported
00:06:51.684 Variable Capacity Management: Not Supported
00:06:51.684 Delete Endurance Group: Not Supported
00:06:51.684 Delete NVM Set: Not Supported
00:06:51.684 Extended LBA Formats Supported: Supported
00:06:51.684 Flexible Data Placement Supported: Not Supported
00:06:51.684 
00:06:51.684 Controller Memory Buffer Support
00:06:51.684 ================================
00:06:51.684 Supported: No
00:06:51.684 
00:06:51.684 Persistent Memory Region Support
00:06:51.684 ================================
00:06:51.684 Supported: No
00:06:51.684 
00:06:51.684 Admin Command Set Attributes
00:06:51.684 ============================
00:06:51.684 Security Send/Receive: Not Supported
00:06:51.684 Format NVM: Supported
00:06:51.684 Firmware Activate/Download: Not Supported
00:06:51.684 Namespace Management: Supported
00:06:51.684 Device Self-Test: Not Supported
00:06:51.684 Directives: Supported
00:06:51.684 NVMe-MI: Not Supported
00:06:51.684 Virtualization Management: Not Supported
00:06:51.684 Doorbell Buffer Config: Supported
00:06:51.684 Get LBA Status Capability: Not Supported
00:06:51.684 Command & Feature Lockdown Capability: Not Supported
00:06:51.684 Abort Command Limit: 4
00:06:51.684 Async Event Request Limit: 4
00:06:51.684 Number of Firmware Slots: N/A
00:06:51.684 Firmware Slot 1 Read-Only: N/A
00:06:51.684 Firmware Activation Without Reset: N/A
00:06:51.684 Multiple Update Detection Support: N/A
00:06:51.684 Firmware Update Granularity: No Information Provided
00:06:51.684 Per-Namespace SMART Log: Yes
00:06:51.684 Asymmetric Namespace Access Log Page: Not Supported
00:06:51.684 Subsystem NQN: nqn.2019-08.org.qemu:12340
00:06:51.684 Command Effects Log Page: Supported
00:06:51.684 Get Log Page Extended Data: Supported
00:06:51.684 Telemetry Log Pages: Not Supported
00:06:51.684 Persistent Event Log Pages: Not Supported
00:06:51.684 Supported Log Pages Log Page: May Support
00:06:51.684 Commands Supported & Effects Log Page: Not Supported
00:06:51.684 Feature Identifiers & Effects Log Page: May Support
00:06:51.684 NVMe-MI Commands & Effects Log Page: May Support
00:06:51.684 Data Area 4 for Telemetry Log: Not Supported
00:06:51.684 Error Log Page Entries Supported: 1
00:06:51.684 Keep Alive: Not Supported
00:06:51.684 
00:06:51.684 NVM Command Set Attributes
00:06:51.684 ==========================
00:06:51.684 Submission Queue Entry Size
00:06:51.684 Max: 64
00:06:51.684 Min: 64
00:06:51.684 Completion Queue Entry Size
00:06:51.684 Max: 16
00:06:51.684 Min: 16
00:06:51.684 Number of Namespaces: 256
00:06:51.684 Compare Command: Supported
00:06:51.684 Write Uncorrectable Command: Not Supported
00:06:51.684 Dataset Management Command: Supported
00:06:51.684 Write Zeroes Command: Supported
00:06:51.684 Set Features Save Field: Supported
00:06:51.684 Reservations: Not Supported
00:06:51.684 Timestamp: Supported
00:06:51.684 Copy: Supported
00:06:51.684 Volatile Write Cache: Present
00:06:51.684 Atomic Write Unit (Normal): 1
00:06:51.684 Atomic Write Unit (PFail): 1
00:06:51.684 Atomic Compare & Write Unit: 1
00:06:51.684 Fused Compare & Write: Not Supported
00:06:51.684 Scatter-Gather List
00:06:51.684 SGL Command Set: Supported
00:06:51.684 SGL Keyed: Not Supported
00:06:51.684 SGL Bit Bucket Descriptor: Not Supported
00:06:51.684 SGL Metadata Pointer: Not Supported
00:06:51.684 Oversized SGL: Not Supported
00:06:51.684 SGL Metadata Address: Not Supported
00:06:51.684 SGL Offset: Not Supported
00:06:51.684 Transport SGL Data Block: Not Supported
00:06:51.684 Replay Protected Memory Block: Not Supported
00:06:51.684 
00:06:51.684 Firmware Slot Information
00:06:51.684 =========================
00:06:51.684 Active slot: 1
00:06:51.684 Slot 1 Firmware Revision: 1.0
00:06:51.684 
00:06:51.684 
00:06:51.684 Commands Supported and Effects
00:06:51.684 ==============================
00:06:51.684 Admin Commands
00:06:51.684 --------------
00:06:51.684 Delete I/O Submission Queue (00h): Supported
00:06:51.684 Create I/O Submission Queue (01h): Supported
00:06:51.684 Get Log Page (02h): Supported
00:06:51.684 Delete I/O Completion Queue (04h): Supported
00:06:51.684 Create I/O Completion Queue (05h): Supported
00:06:51.684 Identify (06h): Supported
00:06:51.684 Abort (08h): Supported
00:06:51.684 Set Features (09h): Supported
00:06:51.684 Get Features (0Ah): Supported
00:06:51.684 Asynchronous Event Request (0Ch): Supported
00:06:51.684 Namespace Attachment (15h): Supported NS-Inventory-Change
00:06:51.684 Directive Send (19h): Supported
00:06:51.684 Directive Receive (1Ah): Supported
00:06:51.684 Virtualization Management (1Ch): Supported
00:06:51.684 Doorbell Buffer Config (7Ch): Supported
00:06:51.684 Format NVM (80h): Supported LBA-Change
00:06:51.684 I/O Commands
00:06:51.684 ------------
00:06:51.684 Flush (00h): Supported LBA-Change
00:06:51.684 Write (01h): Supported LBA-Change
00:06:51.684 Read (02h): Supported
00:06:51.684 Compare (05h): Supported
00:06:51.684 Write Zeroes (08h): Supported LBA-Change
00:06:51.684 Dataset Management (09h): Supported LBA-Change
00:06:51.684 Unknown (0Ch): Supported
00:06:51.684 Unknown (12h): Supported
00:06:51.684 Copy (19h): Supported LBA-Change
00:06:51.684 Unknown (1Dh): Supported LBA-Change
00:06:51.684 
00:06:51.684 Error Log
00:06:51.684 =========
00:06:51.684 
00:06:51.684 Arbitration
00:06:51.684 ===========
00:06:51.684 Arbitration Burst: no limit
00:06:51.684 
00:06:51.684 Power Management
00:06:51.684 ================
00:06:51.685 Number of Power States: 1
00:06:51.685 Current Power State: Power State #0
00:06:51.685 Power State #0:
00:06:51.685 Max Power: 25.00 W
00:06:51.685 Non-Operational State: Operational
00:06:51.685 Entry Latency: 16 microseconds
00:06:51.685 Exit Latency: 4 microseconds
00:06:51.685 Relative Read Throughput: 0
00:06:51.685 Relative Read Latency: 0
00:06:51.685 Relative Write Throughput: 0
00:06:51.685 Relative Write Latency: 0
00:06:51.685 Idle Power: Not Reported
00:06:51.685 Active Power: Not Reported
00:06:51.685 Non-Operational Permissive Mode: Not Supported
00:06:51.685 
00:06:51.685 Health Information
00:06:51.685 ==================
00:06:51.685 Critical Warnings:
00:06:51.685 Available Spare Space: OK
00:06:51.685 Temperature: OK
00:06:51.685 Device Reliability: OK
00:06:51.685 Read Only: No
00:06:51.685 Volatile Memory Backup: OK
00:06:51.685 Current Temperature: 323 Kelvin (50 Celsius)
00:06:51.685 Temperature Threshold: 343 Kelvin (70 Celsius)
00:06:51.685 Available Spare: 0%
00:06:51.685 Available Spare Threshold: 0%
00:06:51.685 Life Percentage Used: 0%
00:06:51.685 Data Units Read: 709
00:06:51.685 Data Units Written: 637
00:06:51.685 Host Read Commands: 39041
00:06:51.685 Host Write Commands: 38827
00:06:51.685 Controller Busy Time: 0 minutes
00:06:51.685 Power Cycles: 0
00:06:51.685 Power On Hours: 0 hours
00:06:51.685 Unsafe Shutdowns: 0
00:06:51.685 Unrecoverable Media Errors: 0
00:06:51.685 Lifetime Error Log Entries: 0
00:06:51.685 Warning Temperature Time: 0 minutes
00:06:51.685 Critical Temperature Time: 0 minutes
00:06:51.685 
00:06:51.685 Number of Queues
00:06:51.685 ================
00:06:51.685 Number of I/O Submission Queues: 64
00:06:51.685 Number of I/O Completion Queues: 64
00:06:51.685 
00:06:51.685 ZNS Specific Controller Data
00:06:51.685 ============================
00:06:51.685 Zone Append Size Limit: 0
00:06:51.685 
00:06:51.685 
00:06:51.685 Active Namespaces
00:06:51.685 =================
00:06:51.685 Namespace ID:1
00:06:51.685 Error Recovery Timeout: Unlimited
00:06:51.685 Command Set Identifier: NVM (00h)
00:06:51.685 Deallocate: Supported
00:06:51.685 Deallocated/Unwritten Error: Supported
00:06:51.685 Deallocated Read Value: All 0x00
00:06:51.685 Deallocate in Write Zeroes: Not Supported
00:06:51.685 Deallocated Guard Field: 0xFFFF
00:06:51.685 Flush: Supported
00:06:51.685 Reservation: Not Supported
00:06:51.685 Metadata Transferred as: Separate Metadata Buffer
00:06:51.685 Namespace Sharing Capabilities: Private
00:06:51.685 Size (in LBAs): 1548666 (5GiB)
00:06:51.685 Capacity (in LBAs): 1548666 (5GiB)
00:06:51.685 Utilization (in LBAs): 1548666 (5GiB)
00:06:51.685 Thin Provisioning: Not Supported
00:06:51.685 Per-NS Atomic Units: No
00:06:51.685 Maximum Single Source Range Length: 128
00:06:51.685 Maximum Copy Length: 128
00:06:51.685 Maximum Source Range Count: 128
00:06:51.685 NGUID/EUI64 Never Reused: No
00:06:51.685 Namespace Write Protected: No
00:06:51.685 Number of LBA Formats: 8
00:06:51.685 Current LBA Format: LBA Format #07
00:06:51.685 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:51.685 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:51.685 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:51.685 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:51.685 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:51.685 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:51.685 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:51.685 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:51.685 
00:06:51.685 NVM Specific Namespace Data
00:06:51.685 ===========================
00:06:51.685 Logical Block Storage Tag Mask: 0
00:06:51.685 Protection Information Capabilities:
00:06:51.685 16b Guard Protection Information Storage Tag Support: No
00:06:51.685 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:51.685 Storage Tag Check Read Support: No
00:06:51.685 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.685 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.685 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.685 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.685 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.685 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.685 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.685 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
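Controller 12340 is the only one in this run that reports Metadata Transferred as: Separate Metadata Buffer and runs with LBA format #07 (4096-byte data plus 64 bytes of metadata). A hedged post-processing sketch, not part of the test itself, for pulling single fields out of a dump like the one above; the field labels match the tool output, everything else is assumed:

    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    "$identify" -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 \
        | awk -F': +' '/Serial Number/ {print "serial:", $2}
                       /Current LBA Format/ {print "format:", $2}'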
00:06:51.685 06:05:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:06:51.685 06:05:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0
00:06:51.944 =====================================================
00:06:51.944 NVMe Controller at 0000:00:11.0 [1b36:0010]
00:06:51.944 =====================================================
00:06:51.944 Controller Capabilities/Features
00:06:51.944 ================================
00:06:51.944 Vendor ID: 1b36
00:06:51.944 Subsystem Vendor ID: 1af4
00:06:51.944 Serial Number: 12341
00:06:51.944 Model Number: QEMU NVMe Ctrl
00:06:51.944 Firmware Version: 8.0.0
00:06:51.944 Recommended Arb Burst: 6
00:06:51.944 IEEE OUI Identifier: 00 54 52
00:06:51.944 Multi-path I/O
00:06:51.944 May have multiple subsystem ports: No
00:06:51.944 May have multiple controllers: No
00:06:51.944 Associated with SR-IOV VF: No
00:06:51.944 Max Data Transfer Size: 524288
00:06:51.944 Max Number of Namespaces: 256
00:06:51.944 Max Number of I/O Queues: 64
00:06:51.944 NVMe Specification Version (VS): 1.4
00:06:51.944 NVMe Specification Version (Identify): 1.4
00:06:51.944 Maximum Queue Entries: 2048
00:06:51.944 Contiguous Queues Required: Yes
00:06:51.944 Arbitration Mechanisms Supported
00:06:51.944 Weighted Round Robin: Not Supported
00:06:51.944 Vendor Specific: Not Supported
00:06:51.944 Reset Timeout: 7500 ms
00:06:51.944 Doorbell Stride: 4 bytes
00:06:51.944 NVM Subsystem Reset: Not Supported
00:06:51.944 Command Sets Supported
00:06:51.944 NVM Command Set: Supported
00:06:51.944 Boot Partition: Not Supported
00:06:51.944 Memory Page Size Minimum: 4096 bytes
00:06:51.944 Memory Page Size Maximum: 65536 bytes
00:06:51.944 Persistent Memory Region: Not Supported
00:06:51.944 Optional Asynchronous Events Supported
00:06:51.944 Namespace Attribute Notices: Supported
00:06:51.944 Firmware Activation Notices: Not Supported
00:06:51.944 ANA Change Notices: Not Supported
00:06:51.944 PLE Aggregate Log Change Notices: Not Supported
00:06:51.944 LBA Status Info Alert Notices: Not Supported
00:06:51.944 EGE Aggregate Log Change Notices: Not Supported
00:06:51.944 Normal NVM Subsystem Shutdown event: Not Supported
00:06:51.944 Zone Descriptor Change Notices: Not Supported
00:06:51.944 Discovery Log Change Notices: Not Supported
00:06:51.944 Controller Attributes
00:06:51.944 128-bit Host Identifier: Not Supported
00:06:51.944 Non-Operational Permissive Mode: Not Supported
00:06:51.944 NVM Sets: Not Supported
00:06:51.944 Read Recovery Levels: Not Supported
00:06:51.944 Endurance Groups: Not Supported
00:06:51.944 Predictable Latency Mode: Not Supported
00:06:51.944 Traffic Based Keep Alive: Not Supported
00:06:51.944 Namespace Granularity: Not Supported
00:06:51.944 SQ Associations: Not Supported
00:06:51.944 UUID List: Not Supported
00:06:51.944 Multi-Domain Subsystem: Not Supported
00:06:51.944 Fixed Capacity Management: Not Supported
00:06:51.944 Variable Capacity Management: Not Supported
00:06:51.944 Delete Endurance Group: Not Supported
00:06:51.944 Delete NVM Set: Not Supported
00:06:51.944 Extended LBA Formats Supported: Supported
00:06:51.944 Flexible Data Placement Supported: Not Supported
00:06:51.944 
00:06:51.944 Controller Memory Buffer Support
00:06:51.944 ================================
00:06:51.944 Supported: No
00:06:51.944 
00:06:51.944 Persistent Memory Region Support
00:06:51.944 ================================
00:06:51.944 Supported: No
00:06:51.944 
00:06:51.944 Admin Command Set Attributes
00:06:51.944 ============================
00:06:51.944 Security Send/Receive: Not Supported
00:06:51.944 Format NVM: Supported
00:06:51.944 Firmware Activate/Download: Not Supported
00:06:51.944 Namespace Management: Supported
00:06:51.944 Device Self-Test: Not Supported
00:06:51.944 Directives: Supported
00:06:51.944 NVMe-MI: Not Supported
00:06:51.944 Virtualization Management: Not Supported
00:06:51.944 Doorbell Buffer Config: Supported
00:06:51.944 Get LBA Status Capability: Not Supported
00:06:51.944 Command & Feature Lockdown Capability: Not Supported
00:06:51.944 Abort Command Limit: 4
00:06:51.944 Async Event Request Limit: 4
00:06:51.944 Number of Firmware Slots: N/A
00:06:51.944 Firmware Slot 1 Read-Only: N/A
00:06:51.944 Firmware Activation Without Reset: N/A
00:06:51.944 Multiple Update Detection Support: N/A
00:06:51.944 Firmware Update Granularity: No Information Provided
00:06:51.944 Per-Namespace SMART Log: Yes
00:06:51.944 Asymmetric Namespace Access Log Page: Not Supported
00:06:51.944 Subsystem NQN: nqn.2019-08.org.qemu:12341
00:06:51.944 Command Effects Log Page: Supported
00:06:51.944 Get Log Page Extended Data: Supported
00:06:51.944 Telemetry Log Pages: Not Supported
00:06:51.944 Persistent Event Log Pages: Not Supported
00:06:51.944 Supported Log Pages Log Page: May Support
00:06:51.945 Commands Supported & Effects Log Page: Not Supported
00:06:51.945 Feature Identifiers & Effects Log Page: May Support
00:06:51.945 NVMe-MI Commands & Effects Log Page: May Support
00:06:51.945 Data Area 4 for Telemetry Log: Not Supported
00:06:51.945 Error Log Page Entries Supported: 1
00:06:51.945 Keep Alive: Not Supported
00:06:51.945 
00:06:51.945 NVM Command Set Attributes
00:06:51.945 ==========================
00:06:51.945 Submission Queue Entry Size
00:06:51.945 Max: 64
00:06:51.945 Min: 64
00:06:51.945 Completion Queue Entry Size
00:06:51.945 Max: 16
00:06:51.945 Min: 16
00:06:51.945 Number of Namespaces: 256
00:06:51.945 Compare Command: Supported
00:06:51.945 Write Uncorrectable Command: Not Supported
00:06:51.945 Dataset Management Command: Supported
00:06:51.945 Write Zeroes Command: Supported
00:06:51.945 Set Features Save Field: Supported
00:06:51.945 Reservations: Not Supported
00:06:51.945 Timestamp: Supported
00:06:51.945 Copy: Supported
00:06:51.945 Volatile Write Cache: Present
00:06:51.945 Atomic Write Unit (Normal): 1
00:06:51.945 Atomic Write Unit (PFail): 1
00:06:51.945 Atomic Compare & Write Unit: 1
00:06:51.945 Fused Compare & Write: Not Supported
00:06:51.945 Scatter-Gather List
00:06:51.945 SGL Command Set: Supported
00:06:51.945 SGL Keyed: Not Supported
00:06:51.945 SGL Bit Bucket Descriptor: Not Supported
00:06:51.945 SGL Metadata Pointer: Not Supported
00:06:51.945 Oversized SGL: Not Supported
00:06:51.945 SGL Metadata Address: Not Supported
00:06:51.945 SGL Offset: Not Supported
00:06:51.945 Transport SGL Data Block: Not Supported
00:06:51.945 Replay Protected Memory Block: Not Supported
00:06:51.945 
00:06:51.945 Firmware Slot Information
00:06:51.945 =========================
00:06:51.945 Active slot: 1
00:06:51.945 Slot 1 Firmware Revision: 1.0
00:06:51.945 
00:06:51.945 
00:06:51.945 Commands Supported and Effects
00:06:51.945 ==============================
00:06:51.945 Admin Commands
00:06:51.945 --------------
00:06:51.945 Delete I/O Submission Queue (00h): Supported
00:06:51.945 Create I/O Submission Queue (01h): Supported
00:06:51.945 Get Log Page (02h): Supported
00:06:51.945 Delete I/O Completion Queue (04h): Supported
00:06:51.945 Create I/O Completion Queue (05h): Supported
00:06:51.945 Identify (06h): Supported
00:06:51.945 Abort (08h): Supported
00:06:51.945 Set Features (09h): Supported
00:06:51.945 Get Features (0Ah): Supported
00:06:51.945 Asynchronous Event Request (0Ch): Supported
00:06:51.945 Namespace Attachment (15h): Supported NS-Inventory-Change
00:06:51.945 Directive Send (19h): Supported
00:06:51.945 Directive Receive (1Ah): Supported
00:06:51.945 Virtualization Management (1Ch): Supported
00:06:51.945 Doorbell Buffer Config (7Ch): Supported
00:06:51.945 Format NVM (80h): Supported LBA-Change
00:06:51.945 I/O Commands
00:06:51.945 ------------
00:06:51.945 Flush (00h): Supported LBA-Change
00:06:51.945 Write (01h): Supported LBA-Change
00:06:51.945 Read (02h): Supported
00:06:51.945 Compare (05h): Supported
00:06:51.945 Write Zeroes (08h): Supported LBA-Change
00:06:51.945 Dataset Management (09h): Supported LBA-Change
00:06:51.945 Unknown (0Ch): Supported
00:06:51.945 Unknown (12h): Supported
00:06:51.945 Copy (19h): Supported LBA-Change
00:06:51.945 Unknown (1Dh): Supported LBA-Change
00:06:51.945 
00:06:51.945 Error Log
00:06:51.945 =========
00:06:51.945 
00:06:51.945 Arbitration
00:06:51.945 ===========
00:06:51.945 Arbitration Burst: no limit
00:06:51.945 
00:06:51.945 Power Management
00:06:51.945 ================
00:06:51.945 Number of Power States: 1
00:06:51.945 Current Power State: Power State #0
00:06:51.945 Power State #0:
00:06:51.945 Max Power: 25.00 W
00:06:51.945 Non-Operational State: Operational
00:06:51.945 Entry Latency: 16 microseconds
00:06:51.945 Exit Latency: 4 microseconds
00:06:51.945 Relative Read Throughput: 0
00:06:51.945 Relative Read Latency: 0
00:06:51.945 Relative Write Throughput: 0
00:06:51.945 Relative Write Latency: 0
00:06:51.945 Idle Power: Not Reported
00:06:51.945 Active Power: Not Reported
00:06:51.945 Non-Operational Permissive Mode: Not Supported
00:06:51.945 
00:06:51.945 Health Information
00:06:51.945 ==================
00:06:51.945 Critical Warnings:
00:06:51.945 Available Spare Space: OK
00:06:51.945 Temperature: OK
00:06:51.945 Device Reliability: OK
00:06:51.945 Read Only: No
00:06:51.945 Volatile Memory Backup: OK
00:06:51.945 Current Temperature: 323 Kelvin (50 Celsius)
00:06:51.945 Temperature Threshold: 343 Kelvin (70 Celsius)
00:06:51.945 Available Spare: 0%
00:06:51.945 Available Spare Threshold: 0%
00:06:51.945 Life Percentage Used: 0%
00:06:51.945 Data Units Read: 1043
00:06:51.945 Data Units Written: 904
00:06:51.945 Host Read Commands: 56283
00:06:51.945 Host Write Commands: 54953
00:06:51.945 Controller Busy Time: 0 minutes
00:06:51.945 Power Cycles: 0
00:06:51.945 Power On Hours: 0 hours
00:06:51.945 Unsafe Shutdowns: 0
00:06:51.945 Unrecoverable Media Errors: 0
00:06:51.945 Lifetime Error Log Entries: 0
00:06:51.945 Warning Temperature Time: 0 minutes
00:06:51.945 Critical Temperature Time: 0 minutes
00:06:51.945 
00:06:51.945 Number of Queues
00:06:51.945 ================
00:06:51.945 Number of I/O Submission Queues: 64
00:06:51.945 Number of I/O Completion Queues: 64
00:06:51.945 
00:06:51.945 ZNS Specific Controller Data
00:06:51.945 ============================
00:06:51.945 Zone Append Size Limit: 0
00:06:51.945 
00:06:51.945 
00:06:51.945 Active Namespaces
00:06:51.945 =================
00:06:51.945 Namespace ID:1
00:06:51.945 Error Recovery Timeout: Unlimited
00:06:51.945 Command Set Identifier: NVM (00h)
00:06:51.945 Deallocate: Supported
00:06:51.945 Deallocated/Unwritten Error: Supported
00:06:51.945 Deallocated Read Value: All 0x00
00:06:51.945 Deallocate in Write Zeroes: Not Supported
00:06:51.945 Deallocated Guard Field: 0xFFFF
00:06:51.945 Flush: Supported
00:06:51.945 Reservation: Not Supported
00:06:51.945 Namespace Sharing Capabilities: Private
00:06:51.945 Size (in LBAs): 1310720 (5GiB)
00:06:51.945 Capacity (in LBAs): 1310720 (5GiB)
00:06:51.945 Utilization (in LBAs): 1310720 (5GiB)
00:06:51.945 Thin Provisioning: Not Supported
00:06:51.945 Per-NS Atomic Units: No
00:06:51.945 Maximum Single Source Range Length: 128
00:06:51.945 Maximum Copy Length: 128
00:06:51.945 Maximum Source Range Count: 128
00:06:51.945 NGUID/EUI64 Never Reused: No
00:06:51.945 Namespace Write Protected: No
00:06:51.945 Number of LBA Formats: 8
00:06:51.945 Current LBA Format: LBA Format #04
00:06:51.945 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:51.945 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:51.945 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:51.945 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:51.945 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:51.945 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:51.945 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:51.945 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:51.945 
00:06:51.945 NVM Specific Namespace Data
00:06:51.945 ===========================
00:06:51.945 Logical Block Storage Tag Mask: 0
00:06:51.945 Protection Information Capabilities:
00:06:51.945 16b Guard Protection Information Storage Tag Support: No
00:06:51.945 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:51.945 Storage Tag Check Read Support: No
00:06:51.945 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.945 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.945 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.945 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.945 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.945 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.945 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:51.945 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
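The 12341 namespace above reports 1310720 LBAs under LBA format #04, which is 4096-byte data with no metadata, so the 5GiB the tool prints can be sanity-checked with shell arithmetic (an editor's check, not part of the run):

    lbas=1310720 bytes_per_lba=4096
    echo $(( lbas * bytes_per_lba ))               # 5368709120 bytes
    echo $(( lbas * bytes_per_lba / 1024**3 ))GiB  # 5GiB, matching the dump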
00:06:51.945 06:05:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:06:51.945 06:05:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0
00:06:52.205 =====================================================
00:06:52.205 NVMe Controller at 0000:00:12.0 [1b36:0010]
00:06:52.205 =====================================================
00:06:52.205 Controller Capabilities/Features
00:06:52.205 ================================
00:06:52.205 Vendor ID: 1b36
00:06:52.205 Subsystem Vendor ID: 1af4
00:06:52.205 Serial Number: 12342
00:06:52.205 Model Number: QEMU NVMe Ctrl
00:06:52.205 Firmware Version: 8.0.0
00:06:52.205 Recommended Arb Burst: 6
00:06:52.205 IEEE OUI Identifier: 00 54 52
00:06:52.205 Multi-path I/O
00:06:52.205 May have multiple subsystem ports: No
00:06:52.205 May have multiple controllers: No
00:06:52.205 Associated with SR-IOV VF: No
00:06:52.205 Max Data Transfer Size: 524288
00:06:52.205 Max Number of Namespaces: 256
00:06:52.205 Max Number of I/O Queues: 64
00:06:52.205 NVMe Specification Version (VS): 1.4
00:06:52.205 NVMe Specification Version (Identify): 1.4
00:06:52.205 Maximum Queue Entries: 2048
00:06:52.205 Contiguous Queues Required: Yes
00:06:52.205 Arbitration Mechanisms Supported
00:06:52.205 Weighted Round Robin: Not Supported
00:06:52.205 Vendor Specific: Not Supported
00:06:52.205 Reset Timeout: 7500 ms
00:06:52.205 Doorbell Stride: 4 bytes
00:06:52.205 NVM Subsystem Reset: Not Supported
00:06:52.205 Command Sets Supported
00:06:52.205 NVM Command Set: Supported
00:06:52.205 Boot Partition: Not Supported
00:06:52.205 Memory Page Size Minimum: 4096 bytes
00:06:52.205 Memory Page Size Maximum: 65536 bytes
00:06:52.205 Persistent Memory Region: Not Supported
00:06:52.205 Optional Asynchronous Events Supported
00:06:52.205 Namespace Attribute Notices: Supported
00:06:52.205 Firmware Activation Notices: Not Supported
00:06:52.205 ANA Change Notices: Not Supported
00:06:52.205 PLE Aggregate Log Change Notices: Not Supported
00:06:52.205 LBA Status Info Alert Notices: Not Supported
00:06:52.205 EGE Aggregate Log Change Notices: Not Supported
00:06:52.205 Normal NVM Subsystem Shutdown event: Not Supported
00:06:52.205 Zone Descriptor Change Notices: Not Supported
00:06:52.205 Discovery Log Change Notices: Not Supported
00:06:52.205 Controller Attributes
00:06:52.205 128-bit Host Identifier: Not Supported
00:06:52.205 Non-Operational Permissive Mode: Not Supported
00:06:52.205 NVM Sets: Not Supported
00:06:52.205 Read Recovery Levels: Not Supported
00:06:52.205 Endurance Groups: Not Supported
00:06:52.205 Predictable Latency Mode: Not Supported
00:06:52.205 Traffic Based Keep Alive: Not Supported
00:06:52.205 Namespace Granularity: Not Supported
00:06:52.205 SQ Associations: Not Supported
00:06:52.205 UUID List: Not Supported
00:06:52.205 Multi-Domain Subsystem: Not Supported
00:06:52.205 Fixed Capacity Management: Not Supported
00:06:52.205 Variable Capacity Management: Not Supported
00:06:52.205 Delete Endurance Group: Not Supported
00:06:52.205 Delete NVM Set: Not Supported
00:06:52.205 Extended LBA Formats Supported: Supported
00:06:52.205 Flexible Data Placement Supported: Not Supported
00:06:52.205 
00:06:52.205 Controller Memory Buffer Support
00:06:52.205 ================================
00:06:52.205 Supported: No
00:06:52.205 
00:06:52.205 Persistent Memory Region Support
00:06:52.205 ================================
00:06:52.205 Supported: No
00:06:52.205 
00:06:52.205 Admin Command Set Attributes
00:06:52.205 ============================
00:06:52.205 Security Send/Receive: Not Supported
00:06:52.205 Format NVM: Supported
00:06:52.205 Firmware Activate/Download: Not Supported
00:06:52.205 Namespace Management: Supported
00:06:52.205 Device Self-Test: Not Supported
00:06:52.205 Directives: Supported
00:06:52.205 NVMe-MI: Not Supported
00:06:52.205 Virtualization Management: Not Supported
00:06:52.205 Doorbell Buffer Config: Supported
00:06:52.205 Get LBA Status Capability: Not Supported
00:06:52.205 Command & Feature Lockdown Capability: Not Supported
00:06:52.205 Abort Command Limit: 4
00:06:52.205 Async Event Request Limit: 4
00:06:52.205 Number of Firmware Slots: N/A
00:06:52.205 Firmware Slot 1 Read-Only: N/A
00:06:52.205 Firmware Activation Without Reset: N/A
00:06:52.205 Multiple Update Detection Support: N/A
00:06:52.205 Firmware Update Granularity: No Information Provided
00:06:52.205 Per-Namespace SMART Log: Yes
00:06:52.205 Asymmetric Namespace Access Log Page: Not Supported
00:06:52.205 Subsystem NQN: nqn.2019-08.org.qemu:12342
00:06:52.205 Command Effects Log Page: Supported
00:06:52.205 Get Log Page Extended Data: Supported
00:06:52.205 Telemetry Log Pages: Not Supported
00:06:52.205 Persistent Event Log Pages: Not Supported
00:06:52.205 Supported Log Pages Log Page: May Support
00:06:52.205 Commands Supported & Effects Log Page: Not Supported
00:06:52.205 Feature Identifiers & Effects Log Page: May Support
00:06:52.205 NVMe-MI Commands & Effects Log Page: May Support
00:06:52.205 Data Area 4 for Telemetry Log: Not Supported
00:06:52.205 Error Log Page Entries Supported: 1
00:06:52.205 Keep Alive: Not Supported
00:06:52.205 
00:06:52.205 NVM Command Set Attributes
00:06:52.205 ==========================
00:06:52.205 Submission Queue Entry Size
00:06:52.205 Max: 64
00:06:52.205 Min: 64
00:06:52.205 Completion Queue Entry Size
00:06:52.206 Max: 16
00:06:52.206 Min: 16
00:06:52.206 Number of Namespaces: 256
00:06:52.206 Compare Command: Supported
00:06:52.206 Write Uncorrectable Command: Not Supported
00:06:52.206 Dataset Management Command: Supported
00:06:52.206 Write Zeroes Command: Supported
00:06:52.206 Set Features Save Field: Supported
00:06:52.206 Reservations: Not Supported
00:06:52.206 Timestamp: Supported
00:06:52.206 Copy: Supported
00:06:52.206 Volatile Write Cache: Present
00:06:52.206 Atomic Write Unit (Normal): 1
00:06:52.206 Atomic Write Unit (PFail): 1
00:06:52.206 Atomic Compare & Write Unit: 1
00:06:52.206 Fused Compare & Write: Not Supported
00:06:52.206 Scatter-Gather List
00:06:52.206 SGL Command Set: Supported
00:06:52.206 SGL Keyed: Not Supported
00:06:52.206 SGL Bit Bucket Descriptor: Not Supported
00:06:52.206 SGL Metadata Pointer: Not Supported
00:06:52.206 Oversized SGL: Not Supported
00:06:52.206 SGL Metadata Address: Not Supported
00:06:52.206 SGL Offset: Not Supported
00:06:52.206 Transport SGL Data Block: Not Supported
00:06:52.206 Replay Protected Memory Block: Not Supported
00:06:52.206 
00:06:52.206 Firmware Slot Information
00:06:52.206 =========================
00:06:52.206 Active slot: 1
00:06:52.206 Slot 1 Firmware Revision: 1.0
00:06:52.206 
00:06:52.206 
00:06:52.206 Commands Supported and Effects
00:06:52.206 ==============================
00:06:52.206 Admin Commands
00:06:52.206 --------------
00:06:52.206 Delete I/O Submission Queue (00h): Supported
00:06:52.206 Create I/O Submission Queue (01h): Supported
00:06:52.206 Get Log Page (02h): Supported
00:06:52.206 Delete I/O Completion Queue (04h): Supported
00:06:52.206 Create I/O Completion Queue (05h): Supported
00:06:52.206 Identify (06h): Supported
00:06:52.206 Abort (08h): Supported
00:06:52.206 Set Features (09h): Supported
00:06:52.206 Get Features (0Ah): Supported
00:06:52.206 Asynchronous Event Request (0Ch): Supported
00:06:52.206 Namespace Attachment (15h): Supported NS-Inventory-Change
00:06:52.206 Directive Send (19h): Supported
00:06:52.206 Directive Receive (1Ah): Supported
00:06:52.206 Virtualization Management (1Ch): Supported
00:06:52.206 Doorbell Buffer Config (7Ch): Supported
00:06:52.206 Format NVM (80h): Supported LBA-Change
00:06:52.206 I/O Commands
00:06:52.206 ------------
00:06:52.206 Flush (00h): Supported LBA-Change
00:06:52.206 Write (01h): Supported LBA-Change
00:06:52.206 Read (02h): Supported
00:06:52.206 Compare (05h): Supported
00:06:52.206 Write Zeroes (08h): Supported LBA-Change
00:06:52.206 Dataset Management (09h): Supported LBA-Change
00:06:52.206 Unknown (0Ch): Supported
00:06:52.206 Unknown (12h): Supported
00:06:52.206 Copy (19h): Supported LBA-Change
00:06:52.206 Unknown (1Dh): Supported LBA-Change
00:06:52.206 
00:06:52.206 Error Log
00:06:52.206 =========
00:06:52.206 
00:06:52.206 Arbitration
00:06:52.206 ===========
00:06:52.206 Arbitration Burst: no limit
00:06:52.206 
00:06:52.206 Power Management
00:06:52.206 ================
00:06:52.206 Number of Power States: 1
00:06:52.206 Current Power State: Power State #0
00:06:52.206 Power State #0:
00:06:52.206 Max Power: 25.00 W
00:06:52.206 Non-Operational State: Operational
00:06:52.206 Entry Latency: 16 microseconds
00:06:52.206 Exit Latency: 4 microseconds
00:06:52.206 Relative Read Throughput: 0
00:06:52.206 Relative Read Latency: 0
00:06:52.206 Relative Write Throughput: 0
00:06:52.206 Relative Write Latency: 0
00:06:52.206 Idle Power: Not Reported
00:06:52.206 Active Power: Not Reported
00:06:52.206 Non-Operational Permissive Mode: Not Supported
00:06:52.206 
00:06:52.206 Health Information
00:06:52.206 ==================
00:06:52.206 Critical Warnings:
00:06:52.206 Available Spare Space: OK
00:06:52.206 Temperature: OK
00:06:52.206 Device Reliability: OK
00:06:52.206 Read Only: No
00:06:52.206 Volatile Memory Backup: OK
00:06:52.206 Current Temperature: 323 Kelvin (50 Celsius)
00:06:52.206 Temperature Threshold: 343 Kelvin (70 Celsius)
00:06:52.206 Available Spare: 0%
00:06:52.206 Available Spare Threshold: 0%
00:06:52.206 Life Percentage Used: 0%
00:06:52.206 Data Units Read: 2246
00:06:52.206 Data Units Written: 2033
00:06:52.206 Host Read Commands: 118404
00:06:52.206 Host Write Commands: 116677
00:06:52.206 Controller Busy Time: 0 minutes
00:06:52.206 Power Cycles: 0
00:06:52.206 Power On Hours: 0 hours
00:06:52.206 Unsafe Shutdowns: 0
00:06:52.206 Unrecoverable Media Errors: 0
00:06:52.206 Lifetime Error Log Entries: 0
00:06:52.206 Warning Temperature Time: 0 minutes
00:06:52.206 Critical Temperature Time: 0 minutes
00:06:52.206 
00:06:52.206 Number of Queues
00:06:52.206 ================
00:06:52.206 Number of I/O Submission Queues: 64
00:06:52.206 Number of I/O Completion Queues: 64
00:06:52.206 
00:06:52.206 ZNS Specific Controller Data
00:06:52.206 ============================
00:06:52.206 Zone Append Size Limit: 0
00:06:52.206 
00:06:52.206 
00:06:52.206 Active Namespaces
00:06:52.206 =================
00:06:52.206 Namespace ID:1
00:06:52.206 Error Recovery Timeout: Unlimited
00:06:52.206 Command Set Identifier: NVM (00h)
00:06:52.206 Deallocate: Supported
00:06:52.206 Deallocated/Unwritten Error: Supported
00:06:52.206 Deallocated Read Value: All 0x00
00:06:52.206 Deallocate in Write Zeroes: Not Supported
00:06:52.206 Deallocated Guard Field: 0xFFFF
00:06:52.206 Flush: Supported
00:06:52.206 Reservation: Not Supported
00:06:52.206 Namespace Sharing Capabilities: Private
00:06:52.206 Size (in LBAs): 1048576 (4GiB)
00:06:52.206 Capacity (in LBAs): 1048576 (4GiB)
00:06:52.206 Utilization (in LBAs): 1048576 (4GiB)
00:06:52.206 Thin Provisioning: Not Supported
00:06:52.206 Per-NS Atomic Units: No
00:06:52.206 Maximum Single Source Range Length: 128
00:06:52.206 Maximum Copy Length: 128
00:06:52.206 Maximum Source Range Count: 128
00:06:52.206 NGUID/EUI64 Never Reused: No
00:06:52.206 Namespace Write Protected: No
00:06:52.206 Number of LBA Formats: 8
00:06:52.206 Current LBA Format: LBA Format #04
00:06:52.206 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:52.206 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:52.206 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:52.206 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:52.206 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:52.206 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:52.206 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:52.206 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:52.206 
00:06:52.206 NVM Specific Namespace Data
00:06:52.206 ===========================
00:06:52.206 Logical Block Storage Tag Mask: 0
00:06:52.206 Protection Information Capabilities:
00:06:52.206 16b Guard Protection Information Storage Tag Support: No
00:06:52.206 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:52.206 Storage Tag Check Read Support: No
00:06:52.206 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.206 Namespace ID:2
00:06:52.206 Error Recovery Timeout: Unlimited
00:06:52.206 Command Set Identifier: NVM (00h)
00:06:52.206 Deallocate: Supported
00:06:52.206 Deallocated/Unwritten Error: Supported
00:06:52.206 Deallocated Read Value: All 0x00
00:06:52.206 Deallocate in Write Zeroes: Not Supported
00:06:52.206 Deallocated Guard Field: 0xFFFF
00:06:52.206 Flush: Supported
00:06:52.206 Reservation: Not Supported
00:06:52.206 Namespace Sharing Capabilities: Private
00:06:52.206 Size (in LBAs): 1048576 (4GiB)
00:06:52.206 Capacity (in LBAs): 1048576 (4GiB)
00:06:52.206 Utilization (in LBAs): 1048576 (4GiB)
00:06:52.206 Thin Provisioning: Not Supported
00:06:52.206 Per-NS Atomic Units: No
00:06:52.206 Maximum Single Source Range Length: 128
00:06:52.206 Maximum Copy Length: 128
00:06:52.206 Maximum Source Range Count: 128
00:06:52.206 NGUID/EUI64 Never Reused: No
00:06:52.206 Namespace Write Protected: No
00:06:52.206 Number of LBA Formats: 8
00:06:52.206 Current LBA Format: LBA Format #04
00:06:52.207 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:52.207 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:52.207 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:52.207 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:52.207 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:52.207 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:52.207 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:52.207 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:52.207 
00:06:52.207 NVM Specific Namespace Data
00:06:52.207 ===========================
00:06:52.207 Logical Block Storage Tag Mask: 0
00:06:52.207 Protection Information Capabilities:
00:06:52.207 16b Guard Protection Information Storage Tag Support: No
00:06:52.207 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:52.207 Storage Tag Check Read Support: No
00:06:52.207 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Namespace ID:3
00:06:52.207 Error Recovery Timeout: Unlimited
00:06:52.207 Command Set Identifier: NVM (00h)
00:06:52.207 Deallocate: Supported
00:06:52.207 Deallocated/Unwritten Error: Supported
00:06:52.207 Deallocated Read Value: All 0x00
00:06:52.207 Deallocate in Write Zeroes: Not Supported
00:06:52.207 Deallocated Guard Field: 0xFFFF
00:06:52.207 Flush: Supported
00:06:52.207 Reservation: Not Supported
00:06:52.207 Namespace Sharing Capabilities: Private
00:06:52.207 Size (in LBAs): 1048576 (4GiB)
00:06:52.207 Capacity (in LBAs): 1048576 (4GiB)
00:06:52.207 Utilization (in LBAs): 1048576 (4GiB)
00:06:52.207 Thin Provisioning: Not Supported
00:06:52.207 Per-NS Atomic Units: No
00:06:52.207 Maximum Single Source Range Length: 128
00:06:52.207 Maximum Copy Length: 128
00:06:52.207 Maximum Source Range Count: 128
00:06:52.207 NGUID/EUI64 Never Reused: No
00:06:52.207 Namespace Write Protected: No
00:06:52.207 Number of LBA Formats: 8
00:06:52.207 Current LBA Format: LBA Format #04
00:06:52.207 LBA Format #00: Data Size: 512 Metadata Size: 0
00:06:52.207 LBA Format #01: Data Size: 512 Metadata Size: 8
00:06:52.207 LBA Format #02: Data Size: 512 Metadata Size: 16
00:06:52.207 LBA Format #03: Data Size: 512 Metadata Size: 64
00:06:52.207 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:06:52.207 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:06:52.207 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:06:52.207 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:06:52.207 
00:06:52.207 NVM Specific Namespace Data
00:06:52.207 ===========================
00:06:52.207 Logical Block Storage Tag Mask: 0
00:06:52.207 Protection Information Capabilities:
00:06:52.207 16b Guard Protection Information Storage Tag Support: No
00:06:52.207 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:06:52.207 Storage Tag Check Read Support: No
00:06:52.207 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:52.207 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
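Unlike the three controllers above, the next one probed (0000:00:13.0, serial 12343, NQN nqn.2019-08.org.qemu:fdp-subsys3) advertises Endurance Groups and Flexible Data Placement as Supported, which is what produced the FDP log pages at the top of this section. A hedged one-liner per controller to spot FDP support; the addresses and field label are taken from the dumps in this log, the loop itself is an editor's sketch:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        printf '%s: ' "$bdf"
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0 \
            | grep 'Flexible Data Placement Supported'
    done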
00:06:52.466 Contiguous Queues Required: Yes 00:06:52.466 Arbitration Mechanisms Supported 00:06:52.466 Weighted Round Robin: Not Supported 00:06:52.466 Vendor Specific: Not Supported 00:06:52.466 Reset Timeout: 7500 ms 00:06:52.466 Doorbell Stride: 4 bytes 00:06:52.466 NVM Subsystem Reset: Not Supported 00:06:52.466 Command Sets Supported 00:06:52.466 NVM Command Set: Supported 00:06:52.466 Boot Partition: Not Supported 00:06:52.466 Memory Page Size Minimum: 4096 bytes 00:06:52.466 Memory Page Size Maximum: 65536 bytes 00:06:52.466 Persistent Memory Region: Not Supported 00:06:52.466 Optional Asynchronous Events Supported 00:06:52.466 Namespace Attribute Notices: Supported 00:06:52.466 Firmware Activation Notices: Not Supported 00:06:52.466 ANA Change Notices: Not Supported 00:06:52.466 PLE Aggregate Log Change Notices: Not Supported 00:06:52.466 LBA Status Info Alert Notices: Not Supported 00:06:52.466 EGE Aggregate Log Change Notices: Not Supported 00:06:52.466 Normal NVM Subsystem Shutdown event: Not Supported 00:06:52.466 Zone Descriptor Change Notices: Not Supported 00:06:52.466 Discovery Log Change Notices: Not Supported 00:06:52.466 Controller Attributes 00:06:52.466 128-bit Host Identifier: Not Supported 00:06:52.466 Non-Operational Permissive Mode: Not Supported 00:06:52.466 NVM Sets: Not Supported 00:06:52.466 Read Recovery Levels: Not Supported 00:06:52.466 Endurance Groups: Supported 00:06:52.466 Predictable Latency Mode: Not Supported 00:06:52.466 Traffic Based Keep Alive: Not Supported 00:06:52.466 Namespace Granularity: Not Supported 00:06:52.466 SQ Associations: Not Supported 00:06:52.466 UUID List: Not Supported 00:06:52.466 Multi-Domain Subsystem: Not Supported 00:06:52.466 Fixed Capacity Management: Not Supported 00:06:52.466 Variable Capacity Management: Not Supported 00:06:52.466 Delete Endurance Group: Not Supported 00:06:52.466 Delete NVM Set: Not Supported 00:06:52.466 Extended LBA Formats Supported: Supported 00:06:52.466 Flexible Data Placement Supported: Supported 00:06:52.466 00:06:52.466 Controller Memory Buffer Support 00:06:52.466 ================================ 00:06:52.466 Supported: No 00:06:52.466 00:06:52.466 Persistent Memory Region Support 00:06:52.466 ================================ 00:06:52.466 Supported: No 00:06:52.466 00:06:52.466 Admin Command Set Attributes 00:06:52.466 ============================ 00:06:52.466 Security Send/Receive: Not Supported 00:06:52.466 Format NVM: Supported 00:06:52.466 Firmware Activate/Download: Not Supported 00:06:52.466 Namespace Management: Supported 00:06:52.466 Device Self-Test: Not Supported 00:06:52.466 Directives: Supported 00:06:52.466 NVMe-MI: Not Supported 00:06:52.466 Virtualization Management: Not Supported 00:06:52.466 Doorbell Buffer Config: Supported 00:06:52.466 Get LBA Status Capability: Not Supported 00:06:52.466 Command & Feature Lockdown Capability: Not Supported 00:06:52.466 Abort Command Limit: 4 00:06:52.466 Async Event Request Limit: 4 00:06:52.466 Number of Firmware Slots: N/A 00:06:52.466 Firmware Slot 1 Read-Only: N/A 00:06:52.466 Firmware Activation Without Reset: N/A 00:06:52.466 Multiple Update Detection Support: N/A 00:06:52.466 Firmware Update Granularity: No Information Provided 00:06:52.466 Per-Namespace SMART Log: Yes 00:06:52.466 Asymmetric Namespace Access Log Page: Not Supported 00:06:52.466 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:06:52.466 Command Effects Log Page: Supported 00:06:52.466 Get Log Page Extended Data: Supported 00:06:52.466 Telemetry Log Pages: Not
Supported 00:06:52.466 Persistent Event Log Pages: Not Supported 00:06:52.466 Supported Log Pages Log Page: May Support 00:06:52.466 Commands Supported & Effects Log Page: Not Supported 00:06:52.466 Feature Identifiers & Effects Log Page: May Support 00:06:52.466 NVMe-MI Commands & Effects Log Page: May Support 00:06:52.466 Data Area 4 for Telemetry Log: Not Supported 00:06:52.466 Error Log Page Entries Supported: 1 00:06:52.466 Keep Alive: Not Supported 00:06:52.466 00:06:52.466 NVM Command Set Attributes 00:06:52.466 ========================== 00:06:52.466 Submission Queue Entry Size 00:06:52.466 Max: 64 00:06:52.466 Min: 64 00:06:52.466 Completion Queue Entry Size 00:06:52.466 Max: 16 00:06:52.466 Min: 16 00:06:52.466 Number of Namespaces: 256 00:06:52.466 Compare Command: Supported 00:06:52.466 Write Uncorrectable Command: Not Supported 00:06:52.466 Dataset Management Command: Supported 00:06:52.466 Write Zeroes Command: Supported 00:06:52.466 Set Features Save Field: Supported 00:06:52.466 Reservations: Not Supported 00:06:52.466 Timestamp: Supported 00:06:52.466 Copy: Supported 00:06:52.466 Volatile Write Cache: Present 00:06:52.466 Atomic Write Unit (Normal): 1 00:06:52.466 Atomic Write Unit (PFail): 1 00:06:52.466 Atomic Compare & Write Unit: 1 00:06:52.466 Fused Compare & Write: Not Supported 00:06:52.466 Scatter-Gather List 00:06:52.466 SGL Command Set: Supported 00:06:52.466 SGL Keyed: Not Supported 00:06:52.466 SGL Bit Bucket Descriptor: Not Supported 00:06:52.466 SGL Metadata Pointer: Not Supported 00:06:52.466 Oversized SGL: Not Supported 00:06:52.466 SGL Metadata Address: Not Supported 00:06:52.466 SGL Offset: Not Supported 00:06:52.466 Transport SGL Data Block: Not Supported 00:06:52.466 Replay Protected Memory Block: Not Supported 00:06:52.466 00:06:52.466 Firmware Slot Information 00:06:52.466 ========================= 00:06:52.466 Active slot: 1 00:06:52.466 Slot 1 Firmware Revision: 1.0 00:06:52.466 00:06:52.466 00:06:52.466 Commands Supported and Effects 00:06:52.466 ============================== 00:06:52.466 Admin Commands 00:06:52.466 -------------- 00:06:52.466 Delete I/O Submission Queue (00h): Supported 00:06:52.466 Create I/O Submission Queue (01h): Supported 00:06:52.466 Get Log Page (02h): Supported 00:06:52.466 Delete I/O Completion Queue (04h): Supported 00:06:52.466 Create I/O Completion Queue (05h): Supported 00:06:52.466 Identify (06h): Supported 00:06:52.466 Abort (08h): Supported 00:06:52.466 Set Features (09h): Supported 00:06:52.466 Get Features (0Ah): Supported 00:06:52.466 Asynchronous Event Request (0Ch): Supported 00:06:52.466 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:52.466 Directive Send (19h): Supported 00:06:52.466 Directive Receive (1Ah): Supported 00:06:52.466 Virtualization Management (1Ch): Supported 00:06:52.466 Doorbell Buffer Config (7Ch): Supported 00:06:52.466 Format NVM (80h): Supported LBA-Change 00:06:52.466 I/O Commands 00:06:52.466 ------------ 00:06:52.466 Flush (00h): Supported LBA-Change 00:06:52.466 Write (01h): Supported LBA-Change 00:06:52.466 Read (02h): Supported 00:06:52.466 Compare (05h): Supported 00:06:52.466 Write Zeroes (08h): Supported LBA-Change 00:06:52.466 Dataset Management (09h): Supported LBA-Change 00:06:52.466 Unknown (0Ch): Supported 00:06:52.466 Unknown (12h): Supported 00:06:52.466 Copy (19h): Supported LBA-Change 00:06:52.467 Unknown (1Dh): Supported LBA-Change 00:06:52.467 00:06:52.467 Error Log 00:06:52.467 ========= 00:06:52.467 00:06:52.467 Arbitration 00:06:52.467 ===========
00:06:52.467 Arbitration Burst: no limit 00:06:52.467 00:06:52.467 Power Management 00:06:52.467 ================ 00:06:52.467 Number of Power States: 1 00:06:52.467 Current Power State: Power State #0 00:06:52.467 Power State #0: 00:06:52.467 Max Power: 25.00 W 00:06:52.467 Non-Operational State: Operational 00:06:52.467 Entry Latency: 16 microseconds 00:06:52.467 Exit Latency: 4 microseconds 00:06:52.467 Relative Read Throughput: 0 00:06:52.467 Relative Read Latency: 0 00:06:52.467 Relative Write Throughput: 0 00:06:52.467 Relative Write Latency: 0 00:06:52.467 Idle Power: Not Reported 00:06:52.467 Active Power: Not Reported 00:06:52.467 Non-Operational Permissive Mode: Not Supported 00:06:52.467 00:06:52.467 Health Information 00:06:52.467 ================== 00:06:52.467 Critical Warnings: 00:06:52.467 Available Spare Space: OK 00:06:52.467 Temperature: OK 00:06:52.467 Device Reliability: OK 00:06:52.467 Read Only: No 00:06:52.467 Volatile Memory Backup: OK 00:06:52.467 Current Temperature: 323 Kelvin (50 Celsius) 00:06:52.467 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:52.467 Available Spare: 0% 00:06:52.467 Available Spare Threshold: 0% 00:06:52.467 Life Percentage Used: 0% 00:06:52.467 Data Units Read: 839 00:06:52.467 Data Units Written: 768 00:06:52.467 Host Read Commands: 40174 00:06:52.467 Host Write Commands: 39597 00:06:52.467 Controller Busy Time: 0 minutes 00:06:52.467 Power Cycles: 0 00:06:52.467 Power On Hours: 0 hours 00:06:52.467 Unsafe Shutdowns: 0 00:06:52.467 Unrecoverable Media Errors: 0 00:06:52.467 Lifetime Error Log Entries: 0 00:06:52.467 Warning Temperature Time: 0 minutes 00:06:52.467 Critical Temperature Time: 0 minutes 00:06:52.467 00:06:52.467 Number of Queues 00:06:52.467 ================ 00:06:52.467 Number of I/O Submission Queues: 64 00:06:52.467 Number of I/O Completion Queues: 64 00:06:52.467 00:06:52.467 ZNS Specific Controller Data 00:06:52.467 ============================ 00:06:52.467 Zone Append Size Limit: 0 00:06:52.467 00:06:52.467 00:06:52.467 Active Namespaces 00:06:52.467 ================= 00:06:52.467 Namespace ID:1 00:06:52.467 Error Recovery Timeout: Unlimited 00:06:52.467 Command Set Identifier: NVM (00h) 00:06:52.467 Deallocate: Supported 00:06:52.467 Deallocated/Unwritten Error: Supported 00:06:52.467 Deallocated Read Value: All 0x00 00:06:52.467 Deallocate in Write Zeroes: Not Supported 00:06:52.467 Deallocated Guard Field: 0xFFFF 00:06:52.467 Flush: Supported 00:06:52.467 Reservation: Not Supported 00:06:52.467 Namespace Sharing Capabilities: Multiple Controllers 00:06:52.467 Size (in LBAs): 262144 (1GiB) 00:06:52.467 Capacity (in LBAs): 262144 (1GiB) 00:06:52.467 Utilization (in LBAs): 262144 (1GiB) 00:06:52.467 Thin Provisioning: Not Supported 00:06:52.467 Per-NS Atomic Units: No 00:06:52.467 Maximum Single Source Range Length: 128 00:06:52.467 Maximum Copy Length: 128 00:06:52.467 Maximum Source Range Count: 128 00:06:52.467 NGUID/EUI64 Never Reused: No 00:06:52.467 Namespace Write Protected: No 00:06:52.467 Endurance group ID: 1 00:06:52.467 Number of LBA Formats: 8 00:06:52.467 Current LBA Format: LBA Format #04 00:06:52.467 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:52.467 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:52.467 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:52.467 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:52.467 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:52.467 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:52.467 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:06:52.467 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:52.467 00:06:52.467 Get Feature FDP: 00:06:52.467 ================ 00:06:52.467 Enabled: Yes 00:06:52.467 FDP configuration index: 0 00:06:52.467 00:06:52.467 FDP configurations log page 00:06:52.467 =========================== 00:06:52.467 Number of FDP configurations: 1 00:06:52.467 Version: 0 00:06:52.467 Size: 112 00:06:52.467 FDP Configuration Descriptor: 0 00:06:52.467 Descriptor Size: 96 00:06:52.467 Reclaim Group Identifier format: 2 00:06:52.467 FDP Volatile Write Cache: Not Present 00:06:52.467 FDP Configuration: Valid 00:06:52.467 Vendor Specific Size: 0 00:06:52.467 Number of Reclaim Groups: 2 00:06:52.467 Number of Reclaim Unit Handles: 8 00:06:52.467 Max Placement Identifiers: 128 00:06:52.467 Number of Namespaces Supported: 256 00:06:52.467 Reclaim Unit Nominal Size: 6000000 bytes 00:06:52.467 Estimated Reclaim Unit Time Limit: Not Reported 00:06:52.467 RUH Desc #000: RUH Type: Initially Isolated 00:06:52.467 RUH Desc #001: RUH Type: Initially Isolated 00:06:52.467 RUH Desc #002: RUH Type: Initially Isolated 00:06:52.467 RUH Desc #003: RUH Type: Initially Isolated 00:06:52.467 RUH Desc #004: RUH Type: Initially Isolated 00:06:52.467 RUH Desc #005: RUH Type: Initially Isolated 00:06:52.467 RUH Desc #006: RUH Type: Initially Isolated 00:06:52.467 RUH Desc #007: RUH Type: Initially Isolated 00:06:52.467 00:06:52.467 FDP reclaim unit handle usage log page 00:06:52.467 ====================================== 00:06:52.467 Number of Reclaim Unit Handles: 8 00:06:52.467 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:06:52.467 RUH Usage Desc #001: RUH Attributes: Unused 00:06:52.467 RUH Usage Desc #002: RUH Attributes: Unused 00:06:52.467 RUH Usage Desc #003: RUH Attributes: Unused 00:06:52.467 RUH Usage Desc #004: RUH Attributes: Unused 00:06:52.467 RUH Usage Desc #005: RUH Attributes: Unused 00:06:52.467 RUH Usage Desc #006: RUH Attributes: Unused 00:06:52.467 RUH Usage Desc #007: RUH Attributes: Unused 00:06:52.467 00:06:52.467 FDP statistics log page 00:06:52.467 ======================= 00:06:52.467 Host bytes with metadata written: 478060544 00:06:52.467 Media bytes with metadata written: 478113792 00:06:52.467 Media bytes erased: 0 00:06:52.467 00:06:52.467 FDP events log page 00:06:52.467 =================== 00:06:52.467 Number of FDP events: 0 00:06:52.467 00:06:52.467 NVM Specific Namespace Data 00:06:52.467 =========================== 00:06:52.467 Logical Block Storage Tag Mask: 0 00:06:52.467 Protection Information Capabilities: 00:06:52.467 16b Guard Protection Information Storage Tag Support: No 00:06:52.467 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:52.467 Storage Tag Check Read Support: No 00:06:52.467 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.467 00:06:52.467 real 0m1.180s 00:06:52.467 user 0m0.451s 00:06:52.467 sys 0m0.521s 00:06:52.467 06:05:11 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.467 06:05:11 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:06:52.467 ************************************ 00:06:52.467 END TEST nvme_identify 00:06:52.467 ************************************ 00:06:52.467 06:05:11 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:06:52.467 06:05:11 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.467 06:05:11 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.467 06:05:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.467 ************************************ 00:06:52.467 START TEST nvme_perf 00:06:52.467 ************************************ 00:06:52.467 06:05:12 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:06:52.467 06:05:12 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:06:53.844 Initializing NVMe Controllers 00:06:53.844 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:53.844 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:53.844 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:53.844 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:53.844 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:06:53.844 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:06:53.844 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:06:53.844 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:06:53.844 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:06:53.844 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:06:53.844 Initialization complete. Launching workers. 
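The tables that follow report per-device IOPS and latency percentiles for the perf run launched above. As a minimal sketch of reproducing the two tool invocations in this log outside the CI harness (assuming an SPDK checkout built under the path recorded in the log and the same QEMU-emulated PCIe devices; the flag glosses are best-effort readings of the logged command lines, not authoritative documentation):

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin    # binary path as recorded in this log
# Dump identify data for a single controller, selected by its PCIe transport ID:
"$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
# Sequential reads at queue depth 128 (-q), 12288-byte I/Os (-o), for 1 second (-t);
# -LL requests the per-device latency summaries and detailed histograms shown below.
# -i 0 and -N are carried over verbatim from the logged command line.
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N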
00:06:53.844 ======================================================== 00:06:53.844 Latency(us) 00:06:53.844 Device Information : IOPS MiB/s Average min max 00:06:53.844 PCIE (0000:00:10.0) NSID 1 from core 0: 19377.52 227.08 6612.91 5442.48 31119.38 00:06:53.844 PCIE (0000:00:11.0) NSID 1 from core 0: 19377.52 227.08 6603.41 5514.10 29550.41 00:06:53.844 PCIE (0000:00:13.0) NSID 1 from core 0: 19377.52 227.08 6592.78 5513.76 28054.83 00:06:53.844 PCIE (0000:00:12.0) NSID 1 from core 0: 19377.52 227.08 6580.74 5513.80 26259.96 00:06:53.844 PCIE (0000:00:12.0) NSID 2 from core 0: 19377.52 227.08 6569.95 5508.80 24533.91 00:06:53.844 PCIE (0000:00:12.0) NSID 3 from core 0: 19377.52 227.08 6557.35 5522.62 22645.56 00:06:53.844 ======================================================== 00:06:53.844 Total : 116265.15 1362.48 6586.19 5442.48 31119.38 00:06:53.844 00:06:53.844 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:53.844 ================================================================================= 00:06:53.844 1.00000% : 5545.354us 00:06:53.844 10.00000% : 5747.003us 00:06:53.844 25.00000% : 5948.652us 00:06:53.844 50.00000% : 6276.332us 00:06:53.844 75.00000% : 6604.012us 00:06:53.844 90.00000% : 7612.258us 00:06:53.844 95.00000% : 8973.391us 00:06:53.844 98.00000% : 10032.049us 00:06:53.844 99.00000% : 11090.708us 00:06:53.844 99.50000% : 24802.855us 00:06:53.844 99.90000% : 30650.683us 00:06:53.844 99.99000% : 31255.631us 00:06:53.844 99.99900% : 31255.631us 00:06:53.844 99.99990% : 31255.631us 00:06:53.844 99.99999% : 31255.631us 00:06:53.844 00:06:53.844 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:53.844 ================================================================================= 00:06:53.844 1.00000% : 5620.972us 00:06:53.844 10.00000% : 5797.415us 00:06:53.844 25.00000% : 5973.858us 00:06:53.844 50.00000% : 6251.126us 00:06:53.844 75.00000% : 6553.600us 00:06:53.844 90.00000% : 7561.846us 00:06:53.844 95.00000% : 8973.391us 00:06:53.844 98.00000% : 9880.812us 00:06:53.844 99.00000% : 11342.769us 00:06:53.844 99.50000% : 23290.486us 00:06:53.844 99.90000% : 29239.138us 00:06:53.844 99.99000% : 29642.437us 00:06:53.844 99.99900% : 29642.437us 00:06:53.845 99.99990% : 29642.437us 00:06:53.845 99.99999% : 29642.437us 00:06:53.845 00:06:53.845 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:53.845 ================================================================================= 00:06:53.845 1.00000% : 5620.972us 00:06:53.845 10.00000% : 5797.415us 00:06:53.845 25.00000% : 5973.858us 00:06:53.845 50.00000% : 6251.126us 00:06:53.845 75.00000% : 6553.600us 00:06:53.845 90.00000% : 7561.846us 00:06:53.845 95.00000% : 8872.566us 00:06:53.845 98.00000% : 10032.049us 00:06:53.845 99.00000% : 11393.182us 00:06:53.845 99.50000% : 21778.117us 00:06:53.845 99.90000% : 27625.945us 00:06:53.845 99.99000% : 28029.243us 00:06:53.845 99.99900% : 28230.892us 00:06:53.845 99.99990% : 28230.892us 00:06:53.845 99.99999% : 28230.892us 00:06:53.845 00:06:53.845 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:53.845 ================================================================================= 00:06:53.845 1.00000% : 5646.178us 00:06:53.845 10.00000% : 5797.415us 00:06:53.845 25.00000% : 5999.065us 00:06:53.845 50.00000% : 6251.126us 00:06:53.845 75.00000% : 6553.600us 00:06:53.845 90.00000% : 7612.258us 00:06:53.845 95.00000% : 8872.566us 00:06:53.845 98.00000% : 10032.049us 00:06:53.845 99.00000% : 
11393.182us 00:06:53.845 99.50000% : 19862.449us 00:06:53.845 99.90000% : 25710.277us 00:06:53.845 99.99000% : 26416.049us 00:06:53.845 99.99900% : 26416.049us 00:06:53.845 99.99990% : 26416.049us 00:06:53.845 99.99999% : 26416.049us 00:06:53.845 00:06:53.845 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:53.845 ================================================================================= 00:06:53.845 1.00000% : 5620.972us 00:06:53.845 10.00000% : 5797.415us 00:06:53.845 25.00000% : 5999.065us 00:06:53.845 50.00000% : 6251.126us 00:06:53.845 75.00000% : 6553.600us 00:06:53.845 90.00000% : 7612.258us 00:06:53.845 95.00000% : 8922.978us 00:06:53.845 98.00000% : 10032.049us 00:06:53.845 99.00000% : 11040.295us 00:06:53.845 99.50000% : 18148.431us 00:06:53.845 99.90000% : 24097.083us 00:06:53.845 99.99000% : 24601.206us 00:06:53.845 99.99900% : 24601.206us 00:06:53.845 99.99990% : 24601.206us 00:06:53.845 99.99999% : 24601.206us 00:06:53.845 00:06:53.845 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:53.845 ================================================================================= 00:06:53.845 1.00000% : 5646.178us 00:06:53.845 10.00000% : 5797.415us 00:06:53.845 25.00000% : 5999.065us 00:06:53.845 50.00000% : 6251.126us 00:06:53.845 75.00000% : 6553.600us 00:06:53.845 90.00000% : 7662.671us 00:06:53.845 95.00000% : 8973.391us 00:06:53.845 98.00000% : 10082.462us 00:06:53.845 99.00000% : 11141.120us 00:06:53.845 99.50000% : 16232.763us 00:06:53.845 99.90000% : 22181.415us 00:06:53.845 99.99000% : 22685.538us 00:06:53.845 99.99900% : 22685.538us 00:06:53.845 99.99990% : 22685.538us 00:06:53.845 99.99999% : 22685.538us 00:06:53.845 00:06:53.845 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:53.845 ============================================================================== 00:06:53.845 Range in us Cumulative IO count 00:06:53.845 5419.323 - 5444.529: 0.0052% ( 1) 00:06:53.845 5444.529 - 5469.735: 0.0516% ( 9) 00:06:53.845 5469.735 - 5494.942: 0.2527% ( 39) 00:06:53.845 5494.942 - 5520.148: 0.6240% ( 72) 00:06:53.845 5520.148 - 5545.354: 1.1706% ( 106) 00:06:53.845 5545.354 - 5570.560: 1.8203% ( 126) 00:06:53.845 5570.560 - 5595.766: 2.6764% ( 166) 00:06:53.845 5595.766 - 5620.972: 3.8212% ( 222) 00:06:53.845 5620.972 - 5646.178: 4.8164% ( 193) 00:06:53.845 5646.178 - 5671.385: 6.2087% ( 270) 00:06:53.845 5671.385 - 5696.591: 7.6165% ( 273) 00:06:53.845 5696.591 - 5721.797: 9.2255% ( 312) 00:06:53.845 5721.797 - 5747.003: 10.9014% ( 325) 00:06:53.845 5747.003 - 5772.209: 12.5413% ( 318) 00:06:53.845 5772.209 - 5797.415: 14.2842% ( 338) 00:06:53.845 5797.415 - 5822.622: 16.0530% ( 343) 00:06:53.845 5822.622 - 5847.828: 17.9301% ( 364) 00:06:53.845 5847.828 - 5873.034: 19.7143% ( 346) 00:06:53.845 5873.034 - 5898.240: 21.5656% ( 359) 00:06:53.845 5898.240 - 5923.446: 23.4942% ( 374) 00:06:53.845 5923.446 - 5948.652: 25.5363% ( 396) 00:06:53.845 5948.652 - 5973.858: 27.3412% ( 350) 00:06:53.845 5973.858 - 5999.065: 29.3987% ( 399) 00:06:53.845 5999.065 - 6024.271: 31.3067% ( 370) 00:06:53.845 6024.271 - 6049.477: 33.3952% ( 405) 00:06:53.845 6049.477 - 6074.683: 35.3238% ( 374) 00:06:53.845 6074.683 - 6099.889: 37.5309% ( 428) 00:06:53.845 6099.889 - 6125.095: 39.4957% ( 381) 00:06:53.845 6125.095 - 6150.302: 41.5945% ( 407) 00:06:53.845 6150.302 - 6175.508: 43.6056% ( 390) 00:06:53.845 6175.508 - 6200.714: 45.7250% ( 411) 00:06:53.845 6200.714 - 6225.920: 47.7568% ( 394) 00:06:53.845 6225.920 - 6251.126: 49.8247% ( 
401) 00:06:53.845 6251.126 - 6276.332: 51.9596% ( 414) 00:06:53.845 6276.332 - 6301.538: 54.0687% ( 409) 00:06:53.845 6301.538 - 6326.745: 56.1211% ( 398) 00:06:53.845 6326.745 - 6351.951: 58.1632% ( 396) 00:06:53.845 6351.951 - 6377.157: 60.2671% ( 408) 00:06:53.845 6377.157 - 6402.363: 62.3917% ( 412) 00:06:53.845 6402.363 - 6427.569: 64.6710% ( 442) 00:06:53.845 6427.569 - 6452.775: 66.6357% ( 381) 00:06:53.845 6452.775 - 6503.188: 70.6580% ( 780) 00:06:53.845 6503.188 - 6553.600: 74.1749% ( 682) 00:06:53.845 6553.600 - 6604.012: 77.0472% ( 557) 00:06:53.845 6604.012 - 6654.425: 79.3884% ( 454) 00:06:53.845 6654.425 - 6704.837: 81.0747% ( 327) 00:06:53.845 6704.837 - 6755.249: 82.4567% ( 268) 00:06:53.845 6755.249 - 6805.662: 83.5344% ( 209) 00:06:53.845 6805.662 - 6856.074: 84.4111% ( 170) 00:06:53.845 6856.074 - 6906.486: 85.1795% ( 149) 00:06:53.845 6906.486 - 6956.898: 85.8292% ( 126) 00:06:53.845 6956.898 - 7007.311: 86.4274% ( 116) 00:06:53.845 7007.311 - 7057.723: 86.9224% ( 96) 00:06:53.845 7057.723 - 7108.135: 87.3350% ( 80) 00:06:53.845 7108.135 - 7158.548: 87.7114% ( 73) 00:06:53.845 7158.548 - 7208.960: 88.0363% ( 63) 00:06:53.845 7208.960 - 7259.372: 88.3612% ( 63) 00:06:53.845 7259.372 - 7309.785: 88.6396% ( 54) 00:06:53.845 7309.785 - 7360.197: 88.9078% ( 52) 00:06:53.845 7360.197 - 7410.609: 89.2120% ( 59) 00:06:53.845 7410.609 - 7461.022: 89.4338% ( 43) 00:06:53.845 7461.022 - 7511.434: 89.6813% ( 48) 00:06:53.846 7511.434 - 7561.846: 89.9134% ( 45) 00:06:53.846 7561.846 - 7612.258: 90.1454% ( 45) 00:06:53.846 7612.258 - 7662.671: 90.3156% ( 33) 00:06:53.846 7662.671 - 7713.083: 90.4858% ( 33) 00:06:53.846 7713.083 - 7763.495: 90.6405% ( 30) 00:06:53.846 7763.495 - 7813.908: 90.8674% ( 44) 00:06:53.846 7813.908 - 7864.320: 91.0221% ( 30) 00:06:53.846 7864.320 - 7914.732: 91.2490% ( 44) 00:06:53.846 7914.732 - 7965.145: 91.4191% ( 33) 00:06:53.846 7965.145 - 8015.557: 91.6667% ( 48) 00:06:53.846 8015.557 - 8065.969: 91.8523% ( 36) 00:06:53.846 8065.969 - 8116.382: 92.0534% ( 39) 00:06:53.846 8116.382 - 8166.794: 92.2133% ( 31) 00:06:53.846 8166.794 - 8217.206: 92.3680% ( 30) 00:06:53.846 8217.206 - 8267.618: 92.5227% ( 30) 00:06:53.846 8267.618 - 8318.031: 92.6619% ( 27) 00:06:53.846 8318.031 - 8368.443: 92.8269% ( 32) 00:06:53.846 8368.443 - 8418.855: 93.0074% ( 35) 00:06:53.846 8418.855 - 8469.268: 93.1931% ( 36) 00:06:53.846 8469.268 - 8519.680: 93.3632% ( 33) 00:06:53.846 8519.680 - 8570.092: 93.5592% ( 38) 00:06:53.846 8570.092 - 8620.505: 93.7500% ( 37) 00:06:53.846 8620.505 - 8670.917: 93.8995% ( 29) 00:06:53.846 8670.917 - 8721.329: 94.1007% ( 39) 00:06:53.846 8721.329 - 8771.742: 94.3018% ( 39) 00:06:53.846 8771.742 - 8822.154: 94.5029% ( 39) 00:06:53.846 8822.154 - 8872.566: 94.7298% ( 44) 00:06:53.846 8872.566 - 8922.978: 94.9154% ( 36) 00:06:53.846 8922.978 - 8973.391: 95.1372% ( 43) 00:06:53.846 8973.391 - 9023.803: 95.3331% ( 38) 00:06:53.846 9023.803 - 9074.215: 95.4981% ( 32) 00:06:53.846 9074.215 - 9124.628: 95.7354% ( 46) 00:06:53.846 9124.628 - 9175.040: 95.9210% ( 36) 00:06:53.846 9175.040 - 9225.452: 96.0963% ( 34) 00:06:53.846 9225.452 - 9275.865: 96.2562% ( 31) 00:06:53.846 9275.865 - 9326.277: 96.4264% ( 33) 00:06:53.846 9326.277 - 9376.689: 96.5914% ( 32) 00:06:53.846 9376.689 - 9427.102: 96.7306% ( 27) 00:06:53.846 9427.102 - 9477.514: 96.8492% ( 23) 00:06:53.846 9477.514 - 9527.926: 96.9678% ( 23) 00:06:53.846 9527.926 - 9578.338: 97.0916% ( 24) 00:06:53.846 9578.338 - 9628.751: 97.2257% ( 26) 00:06:53.846 9628.751 - 9679.163: 97.3752% ( 29) 
00:06:53.846 9679.163 - 9729.575: 97.4732% ( 19) 00:06:53.846 9729.575 - 9779.988: 97.6124% ( 27) 00:06:53.846 9779.988 - 9830.400: 97.7362% ( 24) 00:06:53.846 9830.400 - 9880.812: 97.8187% ( 16) 00:06:53.846 9880.812 - 9931.225: 97.9115% ( 18) 00:06:53.846 9931.225 - 9981.637: 97.9734% ( 12) 00:06:53.846 9981.637 - 10032.049: 98.0611% ( 17) 00:06:53.846 10032.049 - 10082.462: 98.1229% ( 12) 00:06:53.846 10082.462 - 10132.874: 98.1848% ( 12) 00:06:53.846 10132.874 - 10183.286: 98.2312% ( 9) 00:06:53.846 10183.286 - 10233.698: 98.2725% ( 8) 00:06:53.846 10233.698 - 10284.111: 98.3137% ( 8) 00:06:53.846 10284.111 - 10334.523: 98.3550% ( 8) 00:06:53.846 10334.523 - 10384.935: 98.4066% ( 10) 00:06:53.846 10384.935 - 10435.348: 98.4581% ( 10) 00:06:53.846 10435.348 - 10485.760: 98.4994% ( 8) 00:06:53.846 10485.760 - 10536.172: 98.5613% ( 12) 00:06:53.846 10536.172 - 10586.585: 98.6077% ( 9) 00:06:53.846 10586.585 - 10636.997: 98.6592% ( 10) 00:06:53.846 10636.997 - 10687.409: 98.7108% ( 10) 00:06:53.846 10687.409 - 10737.822: 98.7572% ( 9) 00:06:53.846 10737.822 - 10788.234: 98.7933% ( 7) 00:06:53.846 10788.234 - 10838.646: 98.8294% ( 7) 00:06:53.846 10838.646 - 10889.058: 98.8758% ( 9) 00:06:53.846 10889.058 - 10939.471: 98.9171% ( 8) 00:06:53.846 10939.471 - 10989.883: 98.9583% ( 8) 00:06:53.846 10989.883 - 11040.295: 98.9996% ( 8) 00:06:53.846 11040.295 - 11090.708: 99.0254% ( 5) 00:06:53.846 11090.708 - 11141.120: 99.0512% ( 5) 00:06:53.846 11141.120 - 11191.532: 99.0769% ( 5) 00:06:53.846 11191.532 - 11241.945: 99.1130% ( 7) 00:06:53.846 11241.945 - 11292.357: 99.1337% ( 4) 00:06:53.846 11292.357 - 11342.769: 99.1646% ( 6) 00:06:53.846 11342.769 - 11393.182: 99.1801% ( 3) 00:06:53.846 11393.182 - 11443.594: 99.2059% ( 5) 00:06:53.846 11443.594 - 11494.006: 99.2316% ( 5) 00:06:53.846 11494.006 - 11544.418: 99.2471% ( 3) 00:06:53.846 11544.418 - 11594.831: 99.2523% ( 1) 00:06:53.846 11594.831 - 11645.243: 99.2677% ( 3) 00:06:53.846 11645.243 - 11695.655: 99.2781% ( 2) 00:06:53.846 11695.655 - 11746.068: 99.2884% ( 2) 00:06:53.846 11746.068 - 11796.480: 99.2987% ( 2) 00:06:53.846 11796.480 - 11846.892: 99.3090% ( 2) 00:06:53.846 11846.892 - 11897.305: 99.3193% ( 2) 00:06:53.846 11897.305 - 11947.717: 99.3296% ( 2) 00:06:53.846 11947.717 - 11998.129: 99.3399% ( 2) 00:06:53.846 23693.785 - 23794.609: 99.3502% ( 2) 00:06:53.846 23794.609 - 23895.434: 99.3709% ( 4) 00:06:53.846 23895.434 - 23996.258: 99.3863% ( 3) 00:06:53.846 23996.258 - 24097.083: 99.4018% ( 3) 00:06:53.846 24097.083 - 24197.908: 99.4224% ( 4) 00:06:53.846 24197.908 - 24298.732: 99.4379% ( 3) 00:06:53.846 24298.732 - 24399.557: 99.4534% ( 3) 00:06:53.846 24399.557 - 24500.382: 99.4740% ( 4) 00:06:53.846 24500.382 - 24601.206: 99.4895% ( 3) 00:06:53.846 24601.206 - 24702.031: 99.4998% ( 2) 00:06:53.846 24702.031 - 24802.855: 99.5204% ( 4) 00:06:53.846 24802.855 - 24903.680: 99.5359% ( 3) 00:06:53.846 24903.680 - 25004.505: 99.5565% ( 4) 00:06:53.846 25004.505 - 25105.329: 99.5771% ( 4) 00:06:53.846 25105.329 - 25206.154: 99.5926% ( 3) 00:06:53.846 25206.154 - 25306.978: 99.6132% ( 4) 00:06:53.846 25306.978 - 25407.803: 99.6287% ( 3) 00:06:53.846 25407.803 - 25508.628: 99.6493% ( 4) 00:06:53.846 25508.628 - 25609.452: 99.6700% ( 4) 00:06:53.846 29037.489 - 29239.138: 99.6906% ( 4) 00:06:53.846 29239.138 - 29440.788: 99.7318% ( 8) 00:06:53.846 29440.788 - 29642.437: 99.7628% ( 6) 00:06:53.846 29642.437 - 29844.086: 99.7886% ( 5) 00:06:53.846 29844.086 - 30045.735: 99.8298% ( 8) 00:06:53.846 30045.735 - 30247.385: 99.8505% ( 4) 
00:06:53.846 30247.385 - 30449.034: 99.8917% ( 8) 00:06:53.846 30449.034 - 30650.683: 99.9175% ( 5) 00:06:53.846 30650.683 - 30852.332: 99.9536% ( 7) 00:06:53.846 30852.332 - 31053.982: 99.9897% ( 7) 00:06:53.846 31053.982 - 31255.631: 100.0000% ( 2) 00:06:53.846 00:06:53.846 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:53.846 ============================================================================== 00:06:53.846 Range in us Cumulative IO count 00:06:53.846 5494.942 - 5520.148: 0.0155% ( 3) 00:06:53.846 5520.148 - 5545.354: 0.0670% ( 10) 00:06:53.846 5545.354 - 5570.560: 0.2011% ( 26) 00:06:53.846 5570.560 - 5595.766: 0.5157% ( 61) 00:06:53.846 5595.766 - 5620.972: 1.0417% ( 102) 00:06:53.847 5620.972 - 5646.178: 1.6863% ( 125) 00:06:53.847 5646.178 - 5671.385: 2.6248% ( 182) 00:06:53.847 5671.385 - 5696.591: 3.7799% ( 224) 00:06:53.847 5696.591 - 5721.797: 5.0949% ( 255) 00:06:53.847 5721.797 - 5747.003: 6.6832% ( 308) 00:06:53.847 5747.003 - 5772.209: 8.2611% ( 306) 00:06:53.847 5772.209 - 5797.415: 10.0454% ( 346) 00:06:53.847 5797.415 - 5822.622: 12.0050% ( 380) 00:06:53.847 5822.622 - 5847.828: 13.9800% ( 383) 00:06:53.847 5847.828 - 5873.034: 16.0066% ( 393) 00:06:53.847 5873.034 - 5898.240: 18.1879% ( 423) 00:06:53.847 5898.240 - 5923.446: 20.4517% ( 439) 00:06:53.847 5923.446 - 5948.652: 22.7774% ( 451) 00:06:53.847 5948.652 - 5973.858: 25.0825% ( 447) 00:06:53.847 5973.858 - 5999.065: 27.3412% ( 438) 00:06:53.847 5999.065 - 6024.271: 29.6514% ( 448) 00:06:53.847 6024.271 - 6049.477: 32.0080% ( 457) 00:06:53.847 6049.477 - 6074.683: 34.3389% ( 452) 00:06:53.847 6074.683 - 6099.889: 36.7162% ( 461) 00:06:53.847 6099.889 - 6125.095: 39.0470% ( 452) 00:06:53.847 6125.095 - 6150.302: 41.4501% ( 466) 00:06:53.847 6150.302 - 6175.508: 43.9099% ( 477) 00:06:53.847 6175.508 - 6200.714: 46.3232% ( 468) 00:06:53.847 6200.714 - 6225.920: 48.7005% ( 461) 00:06:53.847 6225.920 - 6251.126: 51.1964% ( 484) 00:06:53.847 6251.126 - 6276.332: 53.7180% ( 489) 00:06:53.847 6276.332 - 6301.538: 56.1623% ( 474) 00:06:53.847 6301.538 - 6326.745: 58.6531% ( 483) 00:06:53.847 6326.745 - 6351.951: 61.1283% ( 480) 00:06:53.847 6351.951 - 6377.157: 63.5726% ( 474) 00:06:53.847 6377.157 - 6402.363: 66.0736% ( 485) 00:06:53.847 6402.363 - 6427.569: 68.3168% ( 435) 00:06:53.847 6427.569 - 6452.775: 70.3280% ( 390) 00:06:53.847 6452.775 - 6503.188: 73.9996% ( 712) 00:06:53.847 6503.188 - 6553.600: 76.8719% ( 557) 00:06:53.847 6553.600 - 6604.012: 79.1151% ( 435) 00:06:53.847 6604.012 - 6654.425: 80.7653% ( 320) 00:06:53.847 6654.425 - 6704.837: 82.1112% ( 261) 00:06:53.847 6704.837 - 6755.249: 83.1941% ( 210) 00:06:53.847 6755.249 - 6805.662: 84.0914% ( 174) 00:06:53.847 6805.662 - 6856.074: 84.8391% ( 145) 00:06:53.847 6856.074 - 6906.486: 85.4837% ( 125) 00:06:53.847 6906.486 - 6956.898: 86.0716% ( 114) 00:06:53.847 6956.898 - 7007.311: 86.5924% ( 101) 00:06:53.847 7007.311 - 7057.723: 87.0875% ( 96) 00:06:53.847 7057.723 - 7108.135: 87.5567% ( 91) 00:06:53.847 7108.135 - 7158.548: 87.9899% ( 84) 00:06:53.847 7158.548 - 7208.960: 88.3663% ( 73) 00:06:53.847 7208.960 - 7259.372: 88.7067% ( 66) 00:06:53.847 7259.372 - 7309.785: 88.9955% ( 56) 00:06:53.847 7309.785 - 7360.197: 89.2533% ( 50) 00:06:53.847 7360.197 - 7410.609: 89.5111% ( 50) 00:06:53.847 7410.609 - 7461.022: 89.7741% ( 51) 00:06:53.847 7461.022 - 7511.434: 89.9907% ( 42) 00:06:53.847 7511.434 - 7561.846: 90.1712% ( 35) 00:06:53.847 7561.846 - 7612.258: 90.3311% ( 31) 00:06:53.847 7612.258 - 7662.671: 90.5322% ( 39) 
00:06:53.847 7662.671 - 7713.083: 90.7127% ( 35) 00:06:53.847 7713.083 - 7763.495: 90.8880% ( 34) 00:06:53.847 7763.495 - 7813.908: 91.0530% ( 32) 00:06:53.847 7813.908 - 7864.320: 91.2232% ( 33) 00:06:53.847 7864.320 - 7914.732: 91.3676% ( 28) 00:06:53.847 7914.732 - 7965.145: 91.5068% ( 27) 00:06:53.847 7965.145 - 8015.557: 91.6460% ( 27) 00:06:53.847 8015.557 - 8065.969: 91.7801% ( 26) 00:06:53.847 8065.969 - 8116.382: 91.8987% ( 23) 00:06:53.847 8116.382 - 8166.794: 92.0431% ( 28) 00:06:53.847 8166.794 - 8217.206: 92.1823% ( 27) 00:06:53.847 8217.206 - 8267.618: 92.3474% ( 32) 00:06:53.847 8267.618 - 8318.031: 92.5433% ( 38) 00:06:53.847 8318.031 - 8368.443: 92.7238% ( 35) 00:06:53.847 8368.443 - 8418.855: 92.9198% ( 38) 00:06:53.847 8418.855 - 8469.268: 93.1054% ( 36) 00:06:53.847 8469.268 - 8519.680: 93.2756% ( 33) 00:06:53.847 8519.680 - 8570.092: 93.4509% ( 34) 00:06:53.847 8570.092 - 8620.505: 93.6314% ( 35) 00:06:53.847 8620.505 - 8670.917: 93.7913% ( 31) 00:06:53.847 8670.917 - 8721.329: 93.9460% ( 30) 00:06:53.847 8721.329 - 8771.742: 94.1522% ( 40) 00:06:53.847 8771.742 - 8822.154: 94.3637% ( 41) 00:06:53.847 8822.154 - 8872.566: 94.6215% ( 50) 00:06:53.847 8872.566 - 8922.978: 94.8896% ( 52) 00:06:53.847 8922.978 - 8973.391: 95.1630% ( 53) 00:06:53.847 8973.391 - 9023.803: 95.4002% ( 46) 00:06:53.847 9023.803 - 9074.215: 95.6271% ( 44) 00:06:53.847 9074.215 - 9124.628: 95.8282% ( 39) 00:06:53.847 9124.628 - 9175.040: 96.0499% ( 43) 00:06:53.847 9175.040 - 9225.452: 96.2665% ( 42) 00:06:53.847 9225.452 - 9275.865: 96.4831% ( 42) 00:06:53.847 9275.865 - 9326.277: 96.6842% ( 39) 00:06:53.847 9326.277 - 9376.689: 96.8802% ( 38) 00:06:53.847 9376.689 - 9427.102: 97.0813% ( 39) 00:06:53.847 9427.102 - 9477.514: 97.2824% ( 39) 00:06:53.847 9477.514 - 9527.926: 97.4474% ( 32) 00:06:53.847 9527.926 - 9578.338: 97.5969% ( 29) 00:06:53.847 9578.338 - 9628.751: 97.7052% ( 21) 00:06:53.847 9628.751 - 9679.163: 97.8032% ( 19) 00:06:53.847 9679.163 - 9729.575: 97.8548% ( 10) 00:06:53.847 9729.575 - 9779.988: 97.9115% ( 11) 00:06:53.847 9779.988 - 9830.400: 97.9785% ( 13) 00:06:53.847 9830.400 - 9880.812: 98.0404% ( 12) 00:06:53.847 9880.812 - 9931.225: 98.0972% ( 11) 00:06:53.847 9931.225 - 9981.637: 98.1590% ( 12) 00:06:53.847 9981.637 - 10032.049: 98.1951% ( 7) 00:06:53.847 10032.049 - 10082.462: 98.2261% ( 6) 00:06:53.847 10082.462 - 10132.874: 98.2673% ( 8) 00:06:53.847 10132.874 - 10183.286: 98.2983% ( 6) 00:06:53.847 10183.286 - 10233.698: 98.3550% ( 11) 00:06:53.847 10233.698 - 10284.111: 98.3808% ( 5) 00:06:53.847 10284.111 - 10334.523: 98.4220% ( 8) 00:06:53.847 10334.523 - 10384.935: 98.4581% ( 7) 00:06:53.847 10384.935 - 10435.348: 98.4942% ( 7) 00:06:53.847 10435.348 - 10485.760: 98.5355% ( 8) 00:06:53.847 10485.760 - 10536.172: 98.5664% ( 6) 00:06:53.847 10536.172 - 10586.585: 98.6128% ( 9) 00:06:53.847 10586.585 - 10636.997: 98.6489% ( 7) 00:06:53.847 10636.997 - 10687.409: 98.6902% ( 8) 00:06:53.847 10687.409 - 10737.822: 98.7211% ( 6) 00:06:53.847 10737.822 - 10788.234: 98.7572% ( 7) 00:06:53.847 10788.234 - 10838.646: 98.7985% ( 8) 00:06:53.847 10838.646 - 10889.058: 98.8294% ( 6) 00:06:53.847 10889.058 - 10939.471: 98.8604% ( 6) 00:06:53.847 10939.471 - 10989.883: 98.8758% ( 3) 00:06:53.847 10989.883 - 11040.295: 98.8913% ( 3) 00:06:53.847 11040.295 - 11090.708: 98.9119% ( 4) 00:06:53.848 11090.708 - 11141.120: 98.9325% ( 4) 00:06:53.848 11141.120 - 11191.532: 98.9583% ( 5) 00:06:53.848 11191.532 - 11241.945: 98.9790% ( 4) 00:06:53.848 11241.945 - 11292.357: 98.9996% ( 4) 
00:06:53.848 11292.357 - 11342.769: 99.0305% ( 6) 00:06:53.848 11342.769 - 11393.182: 99.0512% ( 4) 00:06:53.848 11393.182 - 11443.594: 99.0718% ( 4) 00:06:53.848 11443.594 - 11494.006: 99.0924% ( 4) 00:06:53.848 11494.006 - 11544.418: 99.1079% ( 3) 00:06:53.848 11544.418 - 11594.831: 99.1233% ( 3) 00:06:53.848 11594.831 - 11645.243: 99.1337% ( 2) 00:06:53.848 11645.243 - 11695.655: 99.1491% ( 3) 00:06:53.848 11695.655 - 11746.068: 99.1646% ( 3) 00:06:53.848 11746.068 - 11796.480: 99.1852% ( 4) 00:06:53.848 11796.480 - 11846.892: 99.2007% ( 3) 00:06:53.848 11846.892 - 11897.305: 99.2162% ( 3) 00:06:53.848 11897.305 - 11947.717: 99.2316% ( 3) 00:06:53.848 11947.717 - 11998.129: 99.2471% ( 3) 00:06:53.848 11998.129 - 12048.542: 99.2626% ( 3) 00:06:53.848 12048.542 - 12098.954: 99.2781% ( 3) 00:06:53.848 12098.954 - 12149.366: 99.2935% ( 3) 00:06:53.848 12149.366 - 12199.778: 99.3038% ( 2) 00:06:53.848 12199.778 - 12250.191: 99.3193% ( 3) 00:06:53.848 12250.191 - 12300.603: 99.3348% ( 3) 00:06:53.848 12300.603 - 12351.015: 99.3399% ( 1) 00:06:53.848 22282.240 - 22383.065: 99.3451% ( 1) 00:06:53.848 22383.065 - 22483.889: 99.3606% ( 3) 00:06:53.848 22483.889 - 22584.714: 99.3812% ( 4) 00:06:53.848 22584.714 - 22685.538: 99.4018% ( 4) 00:06:53.848 22685.538 - 22786.363: 99.4224% ( 4) 00:06:53.848 22786.363 - 22887.188: 99.4431% ( 4) 00:06:53.848 22887.188 - 22988.012: 99.4585% ( 3) 00:06:53.848 22988.012 - 23088.837: 99.4792% ( 4) 00:06:53.848 23088.837 - 23189.662: 99.4998% ( 4) 00:06:53.848 23189.662 - 23290.486: 99.5204% ( 4) 00:06:53.848 23290.486 - 23391.311: 99.5359% ( 3) 00:06:53.848 23391.311 - 23492.135: 99.5565% ( 4) 00:06:53.848 23492.135 - 23592.960: 99.5720% ( 3) 00:06:53.848 23592.960 - 23693.785: 99.5926% ( 4) 00:06:53.848 23693.785 - 23794.609: 99.6132% ( 4) 00:06:53.848 23794.609 - 23895.434: 99.6339% ( 4) 00:06:53.848 23895.434 - 23996.258: 99.6545% ( 4) 00:06:53.848 23996.258 - 24097.083: 99.6700% ( 3) 00:06:53.848 27625.945 - 27827.594: 99.6803% ( 2) 00:06:53.848 27827.594 - 28029.243: 99.7112% ( 6) 00:06:53.848 28029.243 - 28230.892: 99.7473% ( 7) 00:06:53.848 28230.892 - 28432.542: 99.7834% ( 7) 00:06:53.848 28432.542 - 28634.191: 99.8247% ( 8) 00:06:53.848 28634.191 - 28835.840: 99.8608% ( 7) 00:06:53.848 28835.840 - 29037.489: 99.8969% ( 7) 00:06:53.848 29037.489 - 29239.138: 99.9381% ( 8) 00:06:53.848 29239.138 - 29440.788: 99.9742% ( 7) 00:06:53.848 29440.788 - 29642.437: 100.0000% ( 5) 00:06:53.848 00:06:53.848 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:53.848 ============================================================================== 00:06:53.848 Range in us Cumulative IO count 00:06:53.848 5494.942 - 5520.148: 0.0155% ( 3) 00:06:53.848 5520.148 - 5545.354: 0.0516% ( 7) 00:06:53.848 5545.354 - 5570.560: 0.1599% ( 21) 00:06:53.848 5570.560 - 5595.766: 0.5105% ( 68) 00:06:53.848 5595.766 - 5620.972: 1.0417% ( 103) 00:06:53.848 5620.972 - 5646.178: 1.6759% ( 123) 00:06:53.848 5646.178 - 5671.385: 2.6145% ( 182) 00:06:53.848 5671.385 - 5696.591: 3.7490% ( 220) 00:06:53.848 5696.591 - 5721.797: 4.9505% ( 233) 00:06:53.848 5721.797 - 5747.003: 6.5491% ( 310) 00:06:53.848 5747.003 - 5772.209: 8.2508% ( 330) 00:06:53.848 5772.209 - 5797.415: 10.1330% ( 365) 00:06:53.848 5797.415 - 5822.622: 12.1493% ( 391) 00:06:53.848 5822.622 - 5847.828: 14.2688% ( 411) 00:06:53.848 5847.828 - 5873.034: 16.4140% ( 416) 00:06:53.848 5873.034 - 5898.240: 18.5334% ( 411) 00:06:53.848 5898.240 - 5923.446: 20.7405% ( 428) 00:06:53.848 5923.446 - 5948.652: 22.9785% ( 
434) 00:06:53.848 5948.652 - 5973.858: 25.2527% ( 441) 00:06:53.848 5973.858 - 5999.065: 27.6196% ( 459) 00:06:53.848 5999.065 - 6024.271: 29.9608% ( 454) 00:06:53.848 6024.271 - 6049.477: 32.3639% ( 466) 00:06:53.848 6049.477 - 6074.683: 34.7050% ( 454) 00:06:53.848 6074.683 - 6099.889: 37.0668% ( 458) 00:06:53.848 6099.889 - 6125.095: 39.4544% ( 463) 00:06:53.848 6125.095 - 6150.302: 41.8523% ( 465) 00:06:53.848 6150.302 - 6175.508: 44.2450% ( 464) 00:06:53.848 6175.508 - 6200.714: 46.7306% ( 482) 00:06:53.848 6200.714 - 6225.920: 49.1749% ( 474) 00:06:53.848 6225.920 - 6251.126: 51.6192% ( 474) 00:06:53.848 6251.126 - 6276.332: 54.0532% ( 472) 00:06:53.848 6276.332 - 6301.538: 56.4666% ( 468) 00:06:53.848 6301.538 - 6326.745: 58.9470% ( 481) 00:06:53.848 6326.745 - 6351.951: 61.4325% ( 482) 00:06:53.848 6351.951 - 6377.157: 63.8975% ( 478) 00:06:53.848 6377.157 - 6402.363: 66.2954% ( 465) 00:06:53.848 6402.363 - 6427.569: 68.5283% ( 433) 00:06:53.848 6427.569 - 6452.775: 70.6528% ( 412) 00:06:53.848 6452.775 - 6503.188: 74.3296% ( 713) 00:06:53.848 6503.188 - 6553.600: 77.2535% ( 567) 00:06:53.848 6553.600 - 6604.012: 79.3575% ( 408) 00:06:53.848 6604.012 - 6654.425: 80.9097% ( 301) 00:06:53.848 6654.425 - 6704.837: 82.1215% ( 235) 00:06:53.848 6704.837 - 6755.249: 83.1271% ( 195) 00:06:53.848 6755.249 - 6805.662: 84.0140% ( 172) 00:06:53.848 6805.662 - 6856.074: 84.8133% ( 155) 00:06:53.848 6856.074 - 6906.486: 85.4734% ( 128) 00:06:53.848 6906.486 - 6956.898: 86.0303% ( 108) 00:06:53.848 6956.898 - 7007.311: 86.5254% ( 96) 00:06:53.848 7007.311 - 7057.723: 87.0101% ( 94) 00:06:53.848 7057.723 - 7108.135: 87.4330% ( 82) 00:06:53.848 7108.135 - 7158.548: 87.8558% ( 82) 00:06:53.848 7158.548 - 7208.960: 88.1962% ( 66) 00:06:53.848 7208.960 - 7259.372: 88.5417% ( 67) 00:06:53.848 7259.372 - 7309.785: 88.8253% ( 55) 00:06:53.848 7309.785 - 7360.197: 89.1089% ( 55) 00:06:53.848 7360.197 - 7410.609: 89.3771% ( 52) 00:06:53.848 7410.609 - 7461.022: 89.6504% ( 53) 00:06:53.848 7461.022 - 7511.434: 89.8618% ( 41) 00:06:53.848 7511.434 - 7561.846: 90.0578% ( 38) 00:06:53.848 7561.846 - 7612.258: 90.2176% ( 31) 00:06:53.848 7612.258 - 7662.671: 90.3568% ( 27) 00:06:53.848 7662.671 - 7713.083: 90.5476% ( 37) 00:06:53.848 7713.083 - 7763.495: 90.7178% ( 33) 00:06:53.848 7763.495 - 7813.908: 90.8416% ( 24) 00:06:53.848 7813.908 - 7864.320: 91.0014% ( 31) 00:06:53.848 7864.320 - 7914.732: 91.1355% ( 26) 00:06:53.848 7914.732 - 7965.145: 91.2748% ( 27) 00:06:53.848 7965.145 - 8015.557: 91.4191% ( 28) 00:06:53.848 8015.557 - 8065.969: 91.5532% ( 26) 00:06:53.848 8065.969 - 8116.382: 91.7028% ( 29) 00:06:53.848 8116.382 - 8166.794: 91.8833% ( 35) 00:06:53.848 8166.794 - 8217.206: 92.0947% ( 41) 00:06:53.848 8217.206 - 8267.618: 92.2958% ( 39) 00:06:53.848 8267.618 - 8318.031: 92.5227% ( 44) 00:06:53.848 8318.031 - 8368.443: 92.7651% ( 47) 00:06:53.848 8368.443 - 8418.855: 93.0074% ( 47) 00:06:53.848 8418.855 - 8469.268: 93.2395% ( 45) 00:06:53.848 8469.268 - 8519.680: 93.4922% ( 49) 00:06:53.848 8519.680 - 8570.092: 93.7191% ( 44) 00:06:53.848 8570.092 - 8620.505: 93.9356% ( 42) 00:06:53.848 8620.505 - 8670.917: 94.1986% ( 51) 00:06:53.848 8670.917 - 8721.329: 94.4307% ( 45) 00:06:53.848 8721.329 - 8771.742: 94.6421% ( 41) 00:06:53.848 8771.742 - 8822.154: 94.8639% ( 43) 00:06:53.848 8822.154 - 8872.566: 95.0547% ( 37) 00:06:53.848 8872.566 - 8922.978: 95.2506% ( 38) 00:06:53.848 8922.978 - 8973.391: 95.4517% ( 39) 00:06:53.848 8973.391 - 9023.803: 95.6271% ( 34) 00:06:53.848 9023.803 - 9074.215: 
95.8230% ( 38) 00:06:53.848 9074.215 - 9124.628: 95.9983% ( 34) 00:06:53.848 9124.628 - 9175.040: 96.1685% ( 33) 00:06:53.848 9175.040 - 9225.452: 96.3232% ( 30) 00:06:53.848 9225.452 - 9275.865: 96.4470% ( 24) 00:06:53.848 9275.865 - 9326.277: 96.6068% ( 31) 00:06:53.848 9326.277 - 9376.689: 96.7873% ( 35) 00:06:53.848 9376.689 - 9427.102: 96.9266% ( 27) 00:06:53.848 9427.102 - 9477.514: 97.0606% ( 26) 00:06:53.848 9477.514 - 9527.926: 97.1432% ( 16) 00:06:53.849 9527.926 - 9578.338: 97.2205% ( 15) 00:06:53.849 9578.338 - 9628.751: 97.3030% ( 16) 00:06:53.849 9628.751 - 9679.163: 97.3907% ( 17) 00:06:53.849 9679.163 - 9729.575: 97.4990% ( 21) 00:06:53.849 9729.575 - 9779.988: 97.5969% ( 19) 00:06:53.849 9779.988 - 9830.400: 97.6949% ( 19) 00:06:53.849 9830.400 - 9880.812: 97.7929% ( 19) 00:06:53.849 9880.812 - 9931.225: 97.8909% ( 19) 00:06:53.849 9931.225 - 9981.637: 97.9734% ( 16) 00:06:53.849 9981.637 - 10032.049: 98.0507% ( 15) 00:06:53.849 10032.049 - 10082.462: 98.1178% ( 13) 00:06:53.849 10082.462 - 10132.874: 98.1900% ( 14) 00:06:53.849 10132.874 - 10183.286: 98.2209% ( 6) 00:06:53.849 10183.286 - 10233.698: 98.2570% ( 7) 00:06:53.849 10233.698 - 10284.111: 98.3034% ( 9) 00:06:53.849 10284.111 - 10334.523: 98.3498% ( 9) 00:06:53.849 10334.523 - 10384.935: 98.4066% ( 11) 00:06:53.849 10384.935 - 10435.348: 98.4633% ( 11) 00:06:53.849 10435.348 - 10485.760: 98.5149% ( 10) 00:06:53.849 10485.760 - 10536.172: 98.5613% ( 9) 00:06:53.849 10536.172 - 10586.585: 98.5870% ( 5) 00:06:53.849 10586.585 - 10636.997: 98.6180% ( 6) 00:06:53.849 10636.997 - 10687.409: 98.6592% ( 8) 00:06:53.849 10687.409 - 10737.822: 98.6902% ( 6) 00:06:53.849 10737.822 - 10788.234: 98.7314% ( 8) 00:06:53.849 10788.234 - 10838.646: 98.7675% ( 7) 00:06:53.849 10838.646 - 10889.058: 98.8036% ( 7) 00:06:53.849 10889.058 - 10939.471: 98.8346% ( 6) 00:06:53.849 10939.471 - 10989.883: 98.8449% ( 2) 00:06:53.849 10989.883 - 11040.295: 98.8552% ( 2) 00:06:53.849 11040.295 - 11090.708: 98.8707% ( 3) 00:06:53.849 11090.708 - 11141.120: 98.8758% ( 1) 00:06:53.849 11141.120 - 11191.532: 98.9016% ( 5) 00:06:53.849 11191.532 - 11241.945: 98.9222% ( 4) 00:06:53.849 11241.945 - 11292.357: 98.9480% ( 5) 00:06:53.849 11292.357 - 11342.769: 98.9738% ( 5) 00:06:53.849 11342.769 - 11393.182: 99.0047% ( 6) 00:06:53.849 11393.182 - 11443.594: 99.0357% ( 6) 00:06:53.849 11443.594 - 11494.006: 99.0615% ( 5) 00:06:53.849 11494.006 - 11544.418: 99.0976% ( 7) 00:06:53.849 11544.418 - 11594.831: 99.1233% ( 5) 00:06:53.849 11594.831 - 11645.243: 99.1543% ( 6) 00:06:53.849 11645.243 - 11695.655: 99.1852% ( 6) 00:06:53.849 11695.655 - 11746.068: 99.2059% ( 4) 00:06:53.849 11746.068 - 11796.480: 99.2265% ( 4) 00:06:53.849 11796.480 - 11846.892: 99.2471% ( 4) 00:06:53.849 11846.892 - 11897.305: 99.2677% ( 4) 00:06:53.849 11897.305 - 11947.717: 99.2832% ( 3) 00:06:53.849 11947.717 - 11998.129: 99.3038% ( 4) 00:06:53.849 11998.129 - 12048.542: 99.3245% ( 4) 00:06:53.849 12048.542 - 12098.954: 99.3399% ( 3) 00:06:53.849 20669.046 - 20769.871: 99.3502% ( 2) 00:06:53.849 20769.871 - 20870.695: 99.3657% ( 3) 00:06:53.849 20870.695 - 20971.520: 99.3863% ( 4) 00:06:53.849 20971.520 - 21072.345: 99.3967% ( 2) 00:06:53.849 21072.345 - 21173.169: 99.4173% ( 4) 00:06:53.849 21173.169 - 21273.994: 99.4328% ( 3) 00:06:53.849 21273.994 - 21374.818: 99.4482% ( 3) 00:06:53.849 21374.818 - 21475.643: 99.4637% ( 3) 00:06:53.849 21475.643 - 21576.468: 99.4843% ( 4) 00:06:53.849 21576.468 - 21677.292: 99.4998% ( 3) 00:06:53.849 21677.292 - 21778.117: 99.5204% ( 4) 
00:06:53.849 21778.117 - 21878.942: 99.5410% ( 4) 00:06:53.849 21878.942 - 21979.766: 99.5617% ( 4) 00:06:53.849 21979.766 - 22080.591: 99.5823% ( 4) 00:06:53.849 22080.591 - 22181.415: 99.5978% ( 3) 00:06:53.849 22181.415 - 22282.240: 99.6184% ( 4) 00:06:53.849 22282.240 - 22383.065: 99.6339% ( 3) 00:06:53.849 22383.065 - 22483.889: 99.6545% ( 4) 00:06:53.849 22483.889 - 22584.714: 99.6700% ( 3) 00:06:53.849 26214.400 - 26416.049: 99.6906% ( 4) 00:06:53.849 26416.049 - 26617.698: 99.7318% ( 8) 00:06:53.849 26617.698 - 26819.348: 99.7679% ( 7) 00:06:53.849 26819.348 - 27020.997: 99.8040% ( 7) 00:06:53.849 27020.997 - 27222.646: 99.8401% ( 7) 00:06:53.849 27222.646 - 27424.295: 99.8762% ( 7) 00:06:53.849 27424.295 - 27625.945: 99.9175% ( 8) 00:06:53.849 27625.945 - 27827.594: 99.9536% ( 7) 00:06:53.849 27827.594 - 28029.243: 99.9948% ( 8) 00:06:53.849 28029.243 - 28230.892: 100.0000% ( 1) 00:06:53.849 00:06:53.849 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:53.849 ============================================================================== 00:06:53.849 Range in us Cumulative IO count 00:06:53.849 5494.942 - 5520.148: 0.0155% ( 3) 00:06:53.849 5520.148 - 5545.354: 0.0361% ( 4) 00:06:53.849 5545.354 - 5570.560: 0.1238% ( 17) 00:06:53.849 5570.560 - 5595.766: 0.4280% ( 59) 00:06:53.849 5595.766 - 5620.972: 0.8921% ( 90) 00:06:53.849 5620.972 - 5646.178: 1.6295% ( 143) 00:06:53.849 5646.178 - 5671.385: 2.6196% ( 192) 00:06:53.849 5671.385 - 5696.591: 3.6561% ( 201) 00:06:53.849 5696.591 - 5721.797: 5.0330% ( 267) 00:06:53.849 5721.797 - 5747.003: 6.5594% ( 296) 00:06:53.849 5747.003 - 5772.209: 8.2354% ( 325) 00:06:53.849 5772.209 - 5797.415: 10.0557% ( 353) 00:06:53.849 5797.415 - 5822.622: 12.0101% ( 379) 00:06:53.849 5822.622 - 5847.828: 13.9645% ( 379) 00:06:53.849 5847.828 - 5873.034: 16.0427% ( 403) 00:06:53.849 5873.034 - 5898.240: 18.2292% ( 424) 00:06:53.849 5898.240 - 5923.446: 20.4517% ( 431) 00:06:53.849 5923.446 - 5948.652: 22.7052% ( 437) 00:06:53.849 5948.652 - 5973.858: 24.9433% ( 434) 00:06:53.849 5973.858 - 5999.065: 27.2741% ( 452) 00:06:53.849 5999.065 - 6024.271: 29.5998% ( 451) 00:06:53.849 6024.271 - 6049.477: 31.9565% ( 457) 00:06:53.849 6049.477 - 6074.683: 34.2564% ( 446) 00:06:53.849 6074.683 - 6099.889: 36.6698% ( 468) 00:06:53.849 6099.889 - 6125.095: 39.0677% ( 465) 00:06:53.849 6125.095 - 6150.302: 41.4604% ( 464) 00:06:53.849 6150.302 - 6175.508: 43.8995% ( 473) 00:06:53.849 6175.508 - 6200.714: 46.2974% ( 465) 00:06:53.849 6200.714 - 6225.920: 48.8088% ( 487) 00:06:53.849 6225.920 - 6251.126: 51.3201% ( 487) 00:06:53.849 6251.126 - 6276.332: 53.7490% ( 471) 00:06:53.849 6276.332 - 6301.538: 56.2191% ( 479) 00:06:53.849 6301.538 - 6326.745: 58.7459% ( 490) 00:06:53.849 6326.745 - 6351.951: 61.2005% ( 476) 00:06:53.849 6351.951 - 6377.157: 63.7067% ( 486) 00:06:53.850 6377.157 - 6402.363: 66.1200% ( 468) 00:06:53.850 6402.363 - 6427.569: 68.4303% ( 448) 00:06:53.850 6427.569 - 6452.775: 70.5703% ( 415) 00:06:53.850 6452.775 - 6503.188: 74.2471% ( 713) 00:06:53.850 6503.188 - 6553.600: 77.2948% ( 591) 00:06:53.850 6553.600 - 6604.012: 79.4503% ( 418) 00:06:53.850 6604.012 - 6654.425: 81.0592% ( 312) 00:06:53.850 6654.425 - 6704.837: 82.2814% ( 237) 00:06:53.850 6704.837 - 6755.249: 83.4416% ( 225) 00:06:53.850 6755.249 - 6805.662: 84.4317% ( 192) 00:06:53.850 6805.662 - 6856.074: 85.1898% ( 147) 00:06:53.850 6856.074 - 6906.486: 85.8911% ( 136) 00:06:53.850 6906.486 - 6956.898: 86.5254% ( 123) 00:06:53.850 6956.898 - 7007.311: 87.0204% ( 
96) 00:06:53.850 7007.311 - 7057.723: 87.4536% ( 84) 00:06:53.850 7057.723 - 7108.135: 87.8249% ( 72) 00:06:53.850 7108.135 - 7158.548: 88.1446% ( 62) 00:06:53.850 7158.548 - 7208.960: 88.5004% ( 69) 00:06:53.850 7208.960 - 7259.372: 88.7634% ( 51) 00:06:53.850 7259.372 - 7309.785: 88.9645% ( 39) 00:06:53.850 7309.785 - 7360.197: 89.1708% ( 40) 00:06:53.850 7360.197 - 7410.609: 89.3564% ( 36) 00:06:53.850 7410.609 - 7461.022: 89.5421% ( 36) 00:06:53.850 7461.022 - 7511.434: 89.7690% ( 44) 00:06:53.850 7511.434 - 7561.846: 89.9392% ( 33) 00:06:53.850 7561.846 - 7612.258: 90.0939% ( 30) 00:06:53.850 7612.258 - 7662.671: 90.2486% ( 30) 00:06:53.850 7662.671 - 7713.083: 90.3929% ( 28) 00:06:53.850 7713.083 - 7763.495: 90.5476% ( 30) 00:06:53.850 7763.495 - 7813.908: 90.7127% ( 32) 00:06:53.850 7813.908 - 7864.320: 90.8828% ( 33) 00:06:53.850 7864.320 - 7914.732: 91.0840% ( 39) 00:06:53.850 7914.732 - 7965.145: 91.3108% ( 44) 00:06:53.850 7965.145 - 8015.557: 91.5171% ( 40) 00:06:53.850 8015.557 - 8065.969: 91.7182% ( 39) 00:06:53.850 8065.969 - 8116.382: 91.9400% ( 43) 00:06:53.850 8116.382 - 8166.794: 92.1772% ( 46) 00:06:53.850 8166.794 - 8217.206: 92.3680% ( 37) 00:06:53.850 8217.206 - 8267.618: 92.5794% ( 41) 00:06:53.850 8267.618 - 8318.031: 92.8063% ( 44) 00:06:53.850 8318.031 - 8368.443: 93.0074% ( 39) 00:06:53.850 8368.443 - 8418.855: 93.2189% ( 41) 00:06:53.850 8418.855 - 8469.268: 93.4097% ( 37) 00:06:53.850 8469.268 - 8519.680: 93.5953% ( 36) 00:06:53.850 8519.680 - 8570.092: 93.7603% ( 32) 00:06:53.850 8570.092 - 8620.505: 93.9511% ( 37) 00:06:53.850 8620.505 - 8670.917: 94.1832% ( 45) 00:06:53.850 8670.917 - 8721.329: 94.3998% ( 42) 00:06:53.850 8721.329 - 8771.742: 94.6009% ( 39) 00:06:53.850 8771.742 - 8822.154: 94.8020% ( 39) 00:06:53.850 8822.154 - 8872.566: 95.0134% ( 41) 00:06:53.850 8872.566 - 8922.978: 95.1784% ( 32) 00:06:53.850 8922.978 - 8973.391: 95.3692% ( 37) 00:06:53.850 8973.391 - 9023.803: 95.5394% ( 33) 00:06:53.850 9023.803 - 9074.215: 95.7044% ( 32) 00:06:53.850 9074.215 - 9124.628: 95.8333% ( 25) 00:06:53.850 9124.628 - 9175.040: 95.9777% ( 28) 00:06:53.850 9175.040 - 9225.452: 96.1066% ( 25) 00:06:53.850 9225.452 - 9275.865: 96.2149% ( 21) 00:06:53.850 9275.865 - 9326.277: 96.3232% ( 21) 00:06:53.850 9326.277 - 9376.689: 96.4676% ( 28) 00:06:53.850 9376.689 - 9427.102: 96.6378% ( 33) 00:06:53.850 9427.102 - 9477.514: 96.7925% ( 30) 00:06:53.850 9477.514 - 9527.926: 96.9317% ( 27) 00:06:53.850 9527.926 - 9578.338: 97.0710% ( 27) 00:06:53.850 9578.338 - 9628.751: 97.1999% ( 25) 00:06:53.850 9628.751 - 9679.163: 97.3133% ( 22) 00:06:53.850 9679.163 - 9729.575: 97.4371% ( 24) 00:06:53.850 9729.575 - 9779.988: 97.5351% ( 19) 00:06:53.850 9779.988 - 9830.400: 97.6537% ( 23) 00:06:53.850 9830.400 - 9880.812: 97.7620% ( 21) 00:06:53.850 9880.812 - 9931.225: 97.8651% ( 20) 00:06:53.850 9931.225 - 9981.637: 97.9579% ( 18) 00:06:53.850 9981.637 - 10032.049: 98.0611% ( 20) 00:06:53.850 10032.049 - 10082.462: 98.1487% ( 17) 00:06:53.850 10082.462 - 10132.874: 98.2415% ( 18) 00:06:53.850 10132.874 - 10183.286: 98.3292% ( 17) 00:06:53.850 10183.286 - 10233.698: 98.4117% ( 16) 00:06:53.850 10233.698 - 10284.111: 98.4478% ( 7) 00:06:53.850 10284.111 - 10334.523: 98.4839% ( 7) 00:06:53.850 10334.523 - 10384.935: 98.5252% ( 8) 00:06:53.850 10384.935 - 10435.348: 98.5509% ( 5) 00:06:53.850 10435.348 - 10485.760: 98.5716% ( 4) 00:06:53.850 10485.760 - 10536.172: 98.5922% ( 4) 00:06:53.850 10536.172 - 10586.585: 98.6025% ( 2) 00:06:53.850 10586.585 - 10636.997: 98.6128% ( 2) 
00:06:53.850 10636.997 - 10687.409: 98.6283% ( 3) 00:06:53.850 10687.409 - 10737.822: 98.6489% ( 4) 00:06:53.850 10737.822 - 10788.234: 98.6747% ( 5) 00:06:53.850 10788.234 - 10838.646: 98.7005% ( 5) 00:06:53.850 10838.646 - 10889.058: 98.7469% ( 9) 00:06:53.850 10889.058 - 10939.471: 98.7933% ( 9) 00:06:53.850 10939.471 - 10989.883: 98.8346% ( 8) 00:06:53.850 10989.883 - 11040.295: 98.8552% ( 4) 00:06:53.850 11040.295 - 11090.708: 98.8707% ( 3) 00:06:53.850 11090.708 - 11141.120: 98.8965% ( 5) 00:06:53.850 11141.120 - 11191.532: 98.9222% ( 5) 00:06:53.850 11191.532 - 11241.945: 98.9429% ( 4) 00:06:53.850 11241.945 - 11292.357: 98.9738% ( 6) 00:06:53.850 11292.357 - 11342.769: 98.9944% ( 4) 00:06:53.850 11342.769 - 11393.182: 99.0202% ( 5) 00:06:53.850 11393.182 - 11443.594: 99.0460% ( 5) 00:06:53.850 11443.594 - 11494.006: 99.0769% ( 6) 00:06:53.850 11494.006 - 11544.418: 99.0976% ( 4) 00:06:53.850 11544.418 - 11594.831: 99.1285% ( 6) 00:06:53.850 11594.831 - 11645.243: 99.1491% ( 4) 00:06:53.850 11645.243 - 11695.655: 99.1698% ( 4) 00:06:53.850 11695.655 - 11746.068: 99.1904% ( 4) 00:06:53.850 11746.068 - 11796.480: 99.2213% ( 6) 00:06:53.850 11796.480 - 11846.892: 99.2420% ( 4) 00:06:53.850 11846.892 - 11897.305: 99.2677% ( 5) 00:06:53.850 11897.305 - 11947.717: 99.2935% ( 5) 00:06:53.851 11947.717 - 11998.129: 99.3142% ( 4) 00:06:53.851 11998.129 - 12048.542: 99.3296% ( 3) 00:06:53.851 12048.542 - 12098.954: 99.3399% ( 2) 00:06:53.851 18955.028 - 19055.852: 99.3554% ( 3) 00:06:53.851 19055.852 - 19156.677: 99.3760% ( 4) 00:06:53.851 19156.677 - 19257.502: 99.3967% ( 4) 00:06:53.851 19257.502 - 19358.326: 99.4121% ( 3) 00:06:53.851 19358.326 - 19459.151: 99.4328% ( 4) 00:06:53.851 19459.151 - 19559.975: 99.4534% ( 4) 00:06:53.851 19559.975 - 19660.800: 99.4740% ( 4) 00:06:53.851 19660.800 - 19761.625: 99.4895% ( 3) 00:06:53.851 19761.625 - 19862.449: 99.5101% ( 4) 00:06:53.851 19862.449 - 19963.274: 99.5307% ( 4) 00:06:53.851 19963.274 - 20064.098: 99.5462% ( 3) 00:06:53.851 20064.098 - 20164.923: 99.5668% ( 4) 00:06:53.851 20164.923 - 20265.748: 99.5875% ( 4) 00:06:53.851 20265.748 - 20366.572: 99.6029% ( 3) 00:06:53.851 20366.572 - 20467.397: 99.6236% ( 4) 00:06:53.851 20467.397 - 20568.222: 99.6442% ( 4) 00:06:53.851 20568.222 - 20669.046: 99.6648% ( 4) 00:06:53.851 20669.046 - 20769.871: 99.6700% ( 1) 00:06:53.851 24399.557 - 24500.382: 99.6906% ( 4) 00:06:53.851 24500.382 - 24601.206: 99.7112% ( 4) 00:06:53.851 24601.206 - 24702.031: 99.7267% ( 3) 00:06:53.851 24702.031 - 24802.855: 99.7473% ( 4) 00:06:53.851 24802.855 - 24903.680: 99.7628% ( 3) 00:06:53.851 24903.680 - 25004.505: 99.7834% ( 4) 00:06:53.851 25004.505 - 25105.329: 99.7989% ( 3) 00:06:53.851 25105.329 - 25206.154: 99.8195% ( 4) 00:06:53.851 25206.154 - 25306.978: 99.8401% ( 4) 00:06:53.851 25306.978 - 25407.803: 99.8556% ( 3) 00:06:53.851 25407.803 - 25508.628: 99.8762% ( 4) 00:06:53.851 25508.628 - 25609.452: 99.8969% ( 4) 00:06:53.851 25609.452 - 25710.277: 99.9175% ( 4) 00:06:53.851 25710.277 - 25811.102: 99.9381% ( 4) 00:06:53.851 25811.102 - 26012.751: 99.9691% ( 6) 00:06:53.851 26012.751 - 26214.400: 99.9897% ( 4) 00:06:53.851 26214.400 - 26416.049: 100.0000% ( 2) 00:06:53.851 00:06:53.851 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:53.851 ============================================================================== 00:06:53.851 Range in us Cumulative IO count 00:06:53.851 5494.942 - 5520.148: 0.0052% ( 1) 00:06:53.851 5520.148 - 5545.354: 0.0516% ( 9) 00:06:53.851 5545.354 - 5570.560: 
0.1960% ( 28) 00:06:53.851 5570.560 - 5595.766: 0.4744% ( 54) 00:06:53.851 5595.766 - 5620.972: 1.0107% ( 104) 00:06:53.851 5620.972 - 5646.178: 1.7069% ( 135) 00:06:53.851 5646.178 - 5671.385: 2.5835% ( 170) 00:06:53.851 5671.385 - 5696.591: 3.7232% ( 221) 00:06:53.851 5696.591 - 5721.797: 5.1104% ( 269) 00:06:53.851 5721.797 - 5747.003: 6.7450% ( 317) 00:06:53.851 5747.003 - 5772.209: 8.4829% ( 337) 00:06:53.851 5772.209 - 5797.415: 10.2981% ( 352) 00:06:53.851 5797.415 - 5822.622: 12.1906% ( 367) 00:06:53.851 5822.622 - 5847.828: 14.1347% ( 377) 00:06:53.851 5847.828 - 5873.034: 16.1097% ( 383) 00:06:53.851 5873.034 - 5898.240: 18.2137% ( 408) 00:06:53.851 5898.240 - 5923.446: 20.3795% ( 420) 00:06:53.851 5923.446 - 5948.652: 22.5454% ( 420) 00:06:53.851 5948.652 - 5973.858: 24.8711% ( 451) 00:06:53.851 5973.858 - 5999.065: 27.0782% ( 428) 00:06:53.851 5999.065 - 6024.271: 29.3781% ( 446) 00:06:53.851 6024.271 - 6049.477: 31.7038% ( 451) 00:06:53.851 6049.477 - 6074.683: 34.0140% ( 448) 00:06:53.851 6074.683 - 6099.889: 36.2985% ( 443) 00:06:53.851 6099.889 - 6125.095: 38.6964% ( 465) 00:06:53.851 6125.095 - 6150.302: 41.1252% ( 471) 00:06:53.851 6150.302 - 6175.508: 43.5489% ( 470) 00:06:53.851 6175.508 - 6200.714: 46.0499% ( 485) 00:06:53.851 6200.714 - 6225.920: 48.5097% ( 477) 00:06:53.851 6225.920 - 6251.126: 50.9695% ( 477) 00:06:53.851 6251.126 - 6276.332: 53.4808% ( 487) 00:06:53.851 6276.332 - 6301.538: 55.9715% ( 483) 00:06:53.851 6301.538 - 6326.745: 58.4829% ( 487) 00:06:53.851 6326.745 - 6351.951: 61.0045% ( 489) 00:06:53.851 6351.951 - 6377.157: 63.4695% ( 478) 00:06:53.851 6377.157 - 6402.363: 65.9035% ( 472) 00:06:53.851 6402.363 - 6427.569: 68.1724% ( 440) 00:06:53.851 6427.569 - 6452.775: 70.3177% ( 416) 00:06:53.851 6452.775 - 6503.188: 74.0254% ( 719) 00:06:53.851 6503.188 - 6553.600: 76.9905% ( 575) 00:06:53.851 6553.600 - 6604.012: 79.1976% ( 428) 00:06:53.851 6604.012 - 6654.425: 80.8581% ( 322) 00:06:53.851 6654.425 - 6704.837: 82.1731% ( 255) 00:06:53.851 6704.837 - 6755.249: 83.3333% ( 225) 00:06:53.851 6755.249 - 6805.662: 84.2925% ( 186) 00:06:53.851 6805.662 - 6856.074: 85.1227% ( 161) 00:06:53.851 6856.074 - 6906.486: 85.8292% ( 137) 00:06:53.851 6906.486 - 6956.898: 86.4429% ( 119) 00:06:53.851 6956.898 - 7007.311: 86.9740% ( 103) 00:06:53.852 7007.311 - 7057.723: 87.4587% ( 94) 00:06:53.852 7057.723 - 7108.135: 87.8042% ( 67) 00:06:53.852 7108.135 - 7158.548: 88.1394% ( 65) 00:06:53.852 7158.548 - 7208.960: 88.3818% ( 47) 00:06:53.852 7208.960 - 7259.372: 88.6242% ( 47) 00:06:53.852 7259.372 - 7309.785: 88.8356% ( 41) 00:06:53.852 7309.785 - 7360.197: 89.0161% ( 35) 00:06:53.852 7360.197 - 7410.609: 89.2120% ( 38) 00:06:53.852 7410.609 - 7461.022: 89.4080% ( 38) 00:06:53.852 7461.022 - 7511.434: 89.6246% ( 42) 00:06:53.852 7511.434 - 7561.846: 89.8051% ( 35) 00:06:53.852 7561.846 - 7612.258: 90.0062% ( 39) 00:06:53.852 7612.258 - 7662.671: 90.1712% ( 32) 00:06:53.852 7662.671 - 7713.083: 90.3672% ( 38) 00:06:53.852 7713.083 - 7763.495: 90.5941% ( 44) 00:06:53.852 7763.495 - 7813.908: 90.8106% ( 42) 00:06:53.852 7813.908 - 7864.320: 91.0324% ( 43) 00:06:53.852 7864.320 - 7914.732: 91.2438% ( 41) 00:06:53.852 7914.732 - 7965.145: 91.4862% ( 47) 00:06:53.852 7965.145 - 8015.557: 91.7079% ( 43) 00:06:53.852 8015.557 - 8065.969: 91.9348% ( 44) 00:06:53.852 8065.969 - 8116.382: 92.1566% ( 43) 00:06:53.852 8116.382 - 8166.794: 92.3886% ( 45) 00:06:53.852 8166.794 - 8217.206: 92.6310% ( 47) 00:06:53.852 8217.206 - 8267.618: 92.8837% ( 49) 00:06:53.852 8267.618 - 
8318.031: 93.1002% ( 42) 00:06:53.852 8318.031 - 8368.443: 93.2962% ( 38) 00:06:53.852 8368.443 - 8418.855: 93.4870% ( 37) 00:06:53.852 8418.855 - 8469.268: 93.6778% ( 37) 00:06:53.852 8469.268 - 8519.680: 93.8531% ( 34) 00:06:53.852 8519.680 - 8570.092: 94.0027% ( 29) 00:06:53.852 8570.092 - 8620.505: 94.1264% ( 24) 00:06:53.852 8620.505 - 8670.917: 94.2450% ( 23) 00:06:53.852 8670.917 - 8721.329: 94.3998% ( 30) 00:06:53.852 8721.329 - 8771.742: 94.5545% ( 30) 00:06:53.852 8771.742 - 8822.154: 94.7040% ( 29) 00:06:53.852 8822.154 - 8872.566: 94.8535% ( 29) 00:06:53.852 8872.566 - 8922.978: 95.0031% ( 29) 00:06:53.852 8922.978 - 8973.391: 95.1475% ( 28) 00:06:53.852 8973.391 - 9023.803: 95.3022% ( 30) 00:06:53.852 9023.803 - 9074.215: 95.4775% ( 34) 00:06:53.852 9074.215 - 9124.628: 95.6322% ( 30) 00:06:53.852 9124.628 - 9175.040: 95.7611% ( 25) 00:06:53.852 9175.040 - 9225.452: 95.9210% ( 31) 00:06:53.852 9225.452 - 9275.865: 96.0705% ( 29) 00:06:53.852 9275.865 - 9326.277: 96.2459% ( 34) 00:06:53.852 9326.277 - 9376.689: 96.4006% ( 30) 00:06:53.852 9376.689 - 9427.102: 96.5708% ( 33) 00:06:53.852 9427.102 - 9477.514: 96.7512% ( 35) 00:06:53.852 9477.514 - 9527.926: 96.9214% ( 33) 00:06:53.852 9527.926 - 9578.338: 97.0658% ( 28) 00:06:53.852 9578.338 - 9628.751: 97.1999% ( 26) 00:06:53.852 9628.751 - 9679.163: 97.3236% ( 24) 00:06:53.852 9679.163 - 9729.575: 97.4577% ( 26) 00:06:53.852 9729.575 - 9779.988: 97.5763% ( 23) 00:06:53.852 9779.988 - 9830.400: 97.7001% ( 24) 00:06:53.852 9830.400 - 9880.812: 97.8032% ( 20) 00:06:53.852 9880.812 - 9931.225: 97.8960% ( 18) 00:06:53.852 9931.225 - 9981.637: 97.9785% ( 16) 00:06:53.852 9981.637 - 10032.049: 98.0404% ( 12) 00:06:53.852 10032.049 - 10082.462: 98.0972% ( 11) 00:06:53.852 10082.462 - 10132.874: 98.1539% ( 11) 00:06:53.852 10132.874 - 10183.286: 98.2209% ( 13) 00:06:53.852 10183.286 - 10233.698: 98.2931% ( 14) 00:06:53.852 10233.698 - 10284.111: 98.3705% ( 15) 00:06:53.852 10284.111 - 10334.523: 98.4375% ( 13) 00:06:53.852 10334.523 - 10384.935: 98.4891% ( 10) 00:06:53.852 10384.935 - 10435.348: 98.5458% ( 11) 00:06:53.852 10435.348 - 10485.760: 98.5767% ( 6) 00:06:53.852 10485.760 - 10536.172: 98.6180% ( 8) 00:06:53.852 10536.172 - 10586.585: 98.6850% ( 13) 00:06:53.852 10586.585 - 10636.997: 98.7314% ( 9) 00:06:53.852 10636.997 - 10687.409: 98.7727% ( 8) 00:06:53.852 10687.409 - 10737.822: 98.8243% ( 10) 00:06:53.852 10737.822 - 10788.234: 98.8707% ( 9) 00:06:53.852 10788.234 - 10838.646: 98.9119% ( 8) 00:06:53.852 10838.646 - 10889.058: 98.9377% ( 5) 00:06:53.852 10889.058 - 10939.471: 98.9532% ( 3) 00:06:53.852 10939.471 - 10989.883: 98.9841% ( 6) 00:06:53.852 10989.883 - 11040.295: 99.0047% ( 4) 00:06:53.852 11040.295 - 11090.708: 99.0305% ( 5) 00:06:53.852 11090.708 - 11141.120: 99.0460% ( 3) 00:06:53.852 11141.120 - 11191.532: 99.0666% ( 4) 00:06:53.852 11191.532 - 11241.945: 99.0873% ( 4) 00:06:53.852 11241.945 - 11292.357: 99.1130% ( 5) 00:06:53.852 11292.357 - 11342.769: 99.1337% ( 4) 00:06:53.852 11342.769 - 11393.182: 99.1543% ( 4) 00:06:53.852 11393.182 - 11443.594: 99.1749% ( 4) 00:06:53.852 11443.594 - 11494.006: 99.1904% ( 3) 00:06:53.852 11494.006 - 11544.418: 99.2007% ( 2) 00:06:53.852 11544.418 - 11594.831: 99.2110% ( 2) 00:06:53.852 11594.831 - 11645.243: 99.2213% ( 2) 00:06:53.852 11645.243 - 11695.655: 99.2265% ( 1) 00:06:53.852 11695.655 - 11746.068: 99.2368% ( 2) 00:06:53.852 11746.068 - 11796.480: 99.2471% ( 2) 00:06:53.852 11796.480 - 11846.892: 99.2574% ( 2) 00:06:53.852 11846.892 - 11897.305: 99.2626% ( 1) 
00:06:53.852 11897.305 - 11947.717: 99.2677% ( 1) 00:06:53.852 11947.717 - 11998.129: 99.2781% ( 2) 00:06:53.852 11998.129 - 12048.542: 99.2884% ( 2) 00:06:53.852 12048.542 - 12098.954: 99.2935% ( 1) 00:06:53.852 12098.954 - 12149.366: 99.3038% ( 2) 00:06:53.852 12149.366 - 12199.778: 99.3090% ( 1) 00:06:53.852 12199.778 - 12250.191: 99.3193% ( 2) 00:06:53.852 12250.191 - 12300.603: 99.3245% ( 1) 00:06:53.852 12300.603 - 12351.015: 99.3296% ( 1) 00:06:53.852 12351.015 - 12401.428: 99.3348% ( 1) 00:06:53.852 12401.428 - 12451.840: 99.3399% ( 1) 00:06:53.852 17140.185 - 17241.009: 99.3606% ( 4) 00:06:53.852 17241.009 - 17341.834: 99.3760% ( 3) 00:06:53.853 17341.834 - 17442.658: 99.3915% ( 3) 00:06:53.853 17442.658 - 17543.483: 99.4121% ( 4) 00:06:53.853 17543.483 - 17644.308: 99.4328% ( 4) 00:06:53.853 17644.308 - 17745.132: 99.4482% ( 3) 00:06:53.853 17745.132 - 17845.957: 99.4637% ( 3) 00:06:53.853 17845.957 - 17946.782: 99.4792% ( 3) 00:06:53.853 17946.782 - 18047.606: 99.4946% ( 3) 00:06:53.853 18047.606 - 18148.431: 99.5101% ( 3) 00:06:53.853 18148.431 - 18249.255: 99.5307% ( 4) 00:06:53.853 18249.255 - 18350.080: 99.5462% ( 3) 00:06:53.853 18350.080 - 18450.905: 99.5617% ( 3) 00:06:53.853 18450.905 - 18551.729: 99.5771% ( 3) 00:06:53.853 18551.729 - 18652.554: 99.5926% ( 3) 00:06:53.853 18652.554 - 18753.378: 99.6081% ( 3) 00:06:53.853 18753.378 - 18854.203: 99.6287% ( 4) 00:06:53.853 18854.203 - 18955.028: 99.6493% ( 4) 00:06:53.853 18955.028 - 19055.852: 99.6700% ( 4) 00:06:53.853 22685.538 - 22786.363: 99.6751% ( 1) 00:06:53.853 22786.363 - 22887.188: 99.6906% ( 3) 00:06:53.853 22887.188 - 22988.012: 99.7112% ( 4) 00:06:53.853 22988.012 - 23088.837: 99.7318% ( 4) 00:06:53.853 23088.837 - 23189.662: 99.7525% ( 4) 00:06:53.853 23189.662 - 23290.486: 99.7679% ( 3) 00:06:53.853 23290.486 - 23391.311: 99.7834% ( 3) 00:06:53.853 23391.311 - 23492.135: 99.8040% ( 4) 00:06:53.853 23492.135 - 23592.960: 99.8195% ( 3) 00:06:53.853 23592.960 - 23693.785: 99.8401% ( 4) 00:06:53.853 23693.785 - 23794.609: 99.8556% ( 3) 00:06:53.853 23794.609 - 23895.434: 99.8762% ( 4) 00:06:53.853 23895.434 - 23996.258: 99.8969% ( 4) 00:06:53.853 23996.258 - 24097.083: 99.9123% ( 3) 00:06:53.853 24097.083 - 24197.908: 99.9330% ( 4) 00:06:53.853 24197.908 - 24298.732: 99.9536% ( 4) 00:06:53.853 24298.732 - 24399.557: 99.9691% ( 3) 00:06:53.853 24399.557 - 24500.382: 99.9897% ( 4) 00:06:53.853 24500.382 - 24601.206: 100.0000% ( 2) 00:06:53.853 00:06:53.853 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:53.853 ============================================================================== 00:06:53.853 Range in us Cumulative IO count 00:06:53.853 5520.148 - 5545.354: 0.0361% ( 7) 00:06:53.853 5545.354 - 5570.560: 0.1805% ( 28) 00:06:53.853 5570.560 - 5595.766: 0.4125% ( 45) 00:06:53.853 5595.766 - 5620.972: 0.8457% ( 84) 00:06:53.853 5620.972 - 5646.178: 1.6141% ( 149) 00:06:53.853 5646.178 - 5671.385: 2.6093% ( 193) 00:06:53.853 5671.385 - 5696.591: 3.7335% ( 218) 00:06:53.853 5696.591 - 5721.797: 5.1155% ( 268) 00:06:53.853 5721.797 - 5747.003: 6.5182% ( 272) 00:06:53.853 5747.003 - 5772.209: 8.1735% ( 321) 00:06:53.853 5772.209 - 5797.415: 10.0712% ( 368) 00:06:53.853 5797.415 - 5822.622: 12.0565% ( 385) 00:06:53.853 5822.622 - 5847.828: 14.0161% ( 380) 00:06:53.853 5847.828 - 5873.034: 16.0582% ( 396) 00:06:53.853 5873.034 - 5898.240: 18.2034% ( 416) 00:06:53.853 5898.240 - 5923.446: 20.3073% ( 408) 00:06:53.853 5923.446 - 5948.652: 22.5763% ( 440) 00:06:53.853 5948.652 - 5973.858: 24.8505% 
( 441) 00:06:53.853 5973.858 - 5999.065: 27.2329% ( 462) 00:06:53.853 5999.065 - 6024.271: 29.5637% ( 452) 00:06:53.853 6024.271 - 6049.477: 31.9410% ( 461) 00:06:53.853 6049.477 - 6074.683: 34.3028% ( 458) 00:06:53.853 6074.683 - 6099.889: 36.6543% ( 456) 00:06:53.853 6099.889 - 6125.095: 39.0419% ( 463) 00:06:53.853 6125.095 - 6150.302: 41.5017% ( 477) 00:06:53.853 6150.302 - 6175.508: 43.9408% ( 473) 00:06:53.853 6175.508 - 6200.714: 46.4367% ( 484) 00:06:53.853 6200.714 - 6225.920: 48.9429% ( 486) 00:06:53.853 6225.920 - 6251.126: 51.4955% ( 495) 00:06:53.853 6251.126 - 6276.332: 54.0274% ( 491) 00:06:53.853 6276.332 - 6301.538: 56.5646% ( 492) 00:06:53.853 6301.538 - 6326.745: 59.0862% ( 489) 00:06:53.853 6326.745 - 6351.951: 61.6337% ( 494) 00:06:53.853 6351.951 - 6377.157: 64.0934% ( 477) 00:06:53.853 6377.157 - 6402.363: 66.5068% ( 468) 00:06:53.853 6402.363 - 6427.569: 68.7603% ( 437) 00:06:53.853 6427.569 - 6452.775: 70.8282% ( 401) 00:06:53.853 6452.775 - 6503.188: 74.4018% ( 693) 00:06:53.853 6503.188 - 6553.600: 77.3309% ( 568) 00:06:53.853 6553.600 - 6604.012: 79.4812% ( 417) 00:06:53.853 6604.012 - 6654.425: 81.0644% ( 307) 00:06:53.853 6654.425 - 6704.837: 82.3278% ( 245) 00:06:53.853 6704.837 - 6755.249: 83.4210% ( 212) 00:06:53.853 6755.249 - 6805.662: 84.3234% ( 175) 00:06:53.853 6805.662 - 6856.074: 85.0969% ( 150) 00:06:53.853 6856.074 - 6906.486: 85.7880% ( 134) 00:06:53.853 6906.486 - 6956.898: 86.4119% ( 121) 00:06:53.853 6956.898 - 7007.311: 86.9070% ( 96) 00:06:53.853 7007.311 - 7057.723: 87.3401% ( 84) 00:06:53.853 7057.723 - 7108.135: 87.6650% ( 63) 00:06:53.853 7108.135 - 7158.548: 87.9280% ( 51) 00:06:53.853 7158.548 - 7208.960: 88.1704% ( 47) 00:06:53.853 7208.960 - 7259.372: 88.4179% ( 48) 00:06:53.853 7259.372 - 7309.785: 88.6757% ( 50) 00:06:53.853 7309.785 - 7360.197: 88.9078% ( 45) 00:06:53.853 7360.197 - 7410.609: 89.1295% ( 43) 00:06:53.853 7410.609 - 7461.022: 89.3203% ( 37) 00:06:53.853 7461.022 - 7511.434: 89.5318% ( 41) 00:06:53.853 7511.434 - 7561.846: 89.7380% ( 40) 00:06:53.853 7561.846 - 7612.258: 89.9443% ( 40) 00:06:53.853 7612.258 - 7662.671: 90.1351% ( 37) 00:06:53.853 7662.671 - 7713.083: 90.3465% ( 41) 00:06:53.853 7713.083 - 7763.495: 90.5941% ( 48) 00:06:53.853 7763.495 - 7813.908: 90.8622% ( 52) 00:06:53.853 7813.908 - 7864.320: 91.0840% ( 43) 00:06:53.853 7864.320 - 7914.732: 91.3005% ( 42) 00:06:53.853 7914.732 - 7965.145: 91.5171% ( 42) 00:06:53.853 7965.145 - 8015.557: 91.7337% ( 42) 00:06:53.853 8015.557 - 8065.969: 91.9761% ( 47) 00:06:53.853 8065.969 - 8116.382: 92.1720% ( 38) 00:06:53.853 8116.382 - 8166.794: 92.3680% ( 38) 00:06:53.853 8166.794 - 8217.206: 92.5691% ( 39) 00:06:53.853 8217.206 - 8267.618: 92.8218% ( 49) 00:06:53.853 8267.618 - 8318.031: 93.0126% ( 37) 00:06:53.854 8318.031 - 8368.443: 93.2189% ( 40) 00:06:53.854 8368.443 - 8418.855: 93.3890% ( 33) 00:06:53.854 8418.855 - 8469.268: 93.5644% ( 34) 00:06:53.854 8469.268 - 8519.680: 93.7294% ( 32) 00:06:53.854 8519.680 - 8570.092: 93.8841% ( 30) 00:06:53.854 8570.092 - 8620.505: 93.9821% ( 19) 00:06:53.854 8620.505 - 8670.917: 94.0955% ( 22) 00:06:53.854 8670.917 - 8721.329: 94.1986% ( 20) 00:06:53.854 8721.329 - 8771.742: 94.3585% ( 31) 00:06:53.854 8771.742 - 8822.154: 94.5287% ( 33) 00:06:53.854 8822.154 - 8872.566: 94.6937% ( 32) 00:06:53.854 8872.566 - 8922.978: 94.8690% ( 34) 00:06:53.854 8922.978 - 8973.391: 95.0340% ( 32) 00:06:53.854 8973.391 - 9023.803: 95.1733% ( 27) 00:06:53.854 9023.803 - 9074.215: 95.2970% ( 24) 00:06:53.854 9074.215 - 9124.628: 
95.4208% ( 24) 00:06:53.854 9124.628 - 9175.040: 95.5549% ( 26) 00:06:53.854 9175.040 - 9225.452: 95.7096% ( 30) 00:06:53.854 9225.452 - 9275.865: 95.9107% ( 39) 00:06:53.854 9275.865 - 9326.277: 96.1273% ( 42) 00:06:53.854 9326.277 - 9376.689: 96.3232% ( 38) 00:06:53.854 9376.689 - 9427.102: 96.5398% ( 42) 00:06:53.854 9427.102 - 9477.514: 96.7512% ( 41) 00:06:53.854 9477.514 - 9527.926: 96.9420% ( 37) 00:06:53.854 9527.926 - 9578.338: 97.1122% ( 33) 00:06:53.854 9578.338 - 9628.751: 97.2257% ( 22) 00:06:53.854 9628.751 - 9679.163: 97.3236% ( 19) 00:06:53.854 9679.163 - 9729.575: 97.4268% ( 20) 00:06:53.854 9729.575 - 9779.988: 97.5351% ( 21) 00:06:53.854 9779.988 - 9830.400: 97.6382% ( 20) 00:06:53.854 9830.400 - 9880.812: 97.7310% ( 18) 00:06:53.854 9880.812 - 9931.225: 97.8342% ( 20) 00:06:53.854 9931.225 - 9981.637: 97.9321% ( 19) 00:06:53.854 9981.637 - 10032.049: 97.9992% ( 13) 00:06:53.854 10032.049 - 10082.462: 98.0765% ( 15) 00:06:53.854 10082.462 - 10132.874: 98.1436% ( 13) 00:06:53.854 10132.874 - 10183.286: 98.2054% ( 12) 00:06:53.854 10183.286 - 10233.698: 98.2622% ( 11) 00:06:53.854 10233.698 - 10284.111: 98.3395% ( 15) 00:06:53.854 10284.111 - 10334.523: 98.3911% ( 10) 00:06:53.854 10334.523 - 10384.935: 98.4427% ( 10) 00:06:53.854 10384.935 - 10435.348: 98.4839% ( 8) 00:06:53.854 10435.348 - 10485.760: 98.5200% ( 7) 00:06:53.854 10485.760 - 10536.172: 98.5664% ( 9) 00:06:53.854 10536.172 - 10586.585: 98.6077% ( 8) 00:06:53.854 10586.585 - 10636.997: 98.6541% ( 9) 00:06:53.854 10636.997 - 10687.409: 98.6953% ( 8) 00:06:53.854 10687.409 - 10737.822: 98.7314% ( 7) 00:06:53.854 10737.822 - 10788.234: 98.7624% ( 6) 00:06:53.854 10788.234 - 10838.646: 98.7985% ( 7) 00:06:53.854 10838.646 - 10889.058: 98.8397% ( 8) 00:06:53.854 10889.058 - 10939.471: 98.8707% ( 6) 00:06:53.854 10939.471 - 10989.883: 98.9171% ( 9) 00:06:53.854 10989.883 - 11040.295: 98.9532% ( 7) 00:06:53.854 11040.295 - 11090.708: 98.9944% ( 8) 00:06:53.854 11090.708 - 11141.120: 99.0254% ( 6) 00:06:53.854 11141.120 - 11191.532: 99.0718% ( 9) 00:06:53.854 11191.532 - 11241.945: 99.0976% ( 5) 00:06:53.854 11241.945 - 11292.357: 99.1440% ( 9) 00:06:53.854 11292.357 - 11342.769: 99.1801% ( 7) 00:06:53.854 11342.769 - 11393.182: 99.2007% ( 4) 00:06:53.854 11393.182 - 11443.594: 99.2213% ( 4) 00:06:53.854 11443.594 - 11494.006: 99.2316% ( 2) 00:06:53.854 11494.006 - 11544.418: 99.2471% ( 3) 00:06:53.854 11544.418 - 11594.831: 99.2574% ( 2) 00:06:53.854 11594.831 - 11645.243: 99.2677% ( 2) 00:06:53.854 11645.243 - 11695.655: 99.2781% ( 2) 00:06:53.854 11695.655 - 11746.068: 99.2935% ( 3) 00:06:53.854 11746.068 - 11796.480: 99.3038% ( 2) 00:06:53.854 11796.480 - 11846.892: 99.3193% ( 3) 00:06:53.854 11846.892 - 11897.305: 99.3296% ( 2) 00:06:53.854 11897.305 - 11947.717: 99.3399% ( 2) 00:06:53.854 15224.517 - 15325.342: 99.3451% ( 1) 00:06:53.854 15325.342 - 15426.166: 99.3606% ( 3) 00:06:53.854 15426.166 - 15526.991: 99.3812% ( 4) 00:06:53.854 15526.991 - 15627.815: 99.4018% ( 4) 00:06:53.854 15627.815 - 15728.640: 99.4224% ( 4) 00:06:53.854 15728.640 - 15829.465: 99.4431% ( 4) 00:06:53.854 15829.465 - 15930.289: 99.4585% ( 3) 00:06:53.854 15930.289 - 16031.114: 99.4792% ( 4) 00:06:53.854 16031.114 - 16131.938: 99.4998% ( 4) 00:06:53.854 16131.938 - 16232.763: 99.5153% ( 3) 00:06:53.854 16232.763 - 16333.588: 99.5359% ( 4) 00:06:53.854 16333.588 - 16434.412: 99.5514% ( 3) 00:06:53.854 16434.412 - 16535.237: 99.5668% ( 3) 00:06:53.854 16535.237 - 16636.062: 99.5875% ( 4) 00:06:53.854 16636.062 - 16736.886: 99.6029% ( 3) 
00:06:53.854 16736.886 - 16837.711: 99.6236% ( 4) 00:06:53.854 16837.711 - 16938.535: 99.6442% ( 4) 00:06:53.854 16938.535 - 17039.360: 99.6648% ( 4) 00:06:53.854 17039.360 - 17140.185: 99.6700% ( 1) 00:06:53.854 20669.046 - 20769.871: 99.6751% ( 1) 00:06:53.854 20769.871 - 20870.695: 99.6906% ( 3) 00:06:53.854 20870.695 - 20971.520: 99.7061% ( 3) 00:06:53.854 20971.520 - 21072.345: 99.7267% ( 4) 00:06:53.854 21072.345 - 21173.169: 99.7473% ( 4) 00:06:53.854 21173.169 - 21273.994: 99.7628% ( 3) 00:06:53.854 21273.994 - 21374.818: 99.7783% ( 3) 00:06:53.854 21374.818 - 21475.643: 99.7937% ( 3) 00:06:53.854 21475.643 - 21576.468: 99.8092% ( 3) 00:06:53.854 21576.468 - 21677.292: 99.8247% ( 3) 00:06:53.854 21677.292 - 21778.117: 99.8453% ( 4) 00:06:53.854 21778.117 - 21878.942: 99.8608% ( 3) 00:06:53.854 21878.942 - 21979.766: 99.8762% ( 3) 00:06:53.854 21979.766 - 22080.591: 99.8969% ( 4) 00:06:53.854 22080.591 - 22181.415: 99.9123% ( 3) 00:06:53.854 22181.415 - 22282.240: 99.9330% ( 4) 00:06:53.854 22282.240 - 22383.065: 99.9484% ( 3) 00:06:53.855 22383.065 - 22483.889: 99.9691% ( 4) 00:06:53.855 22483.889 - 22584.714: 99.9845% ( 3) 00:06:53.855 22584.714 - 22685.538: 100.0000% ( 3) 00:06:53.855 00:06:53.855 06:05:13 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:06:55.236 Initializing NVMe Controllers 00:06:55.236 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:55.236 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:55.236 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:55.236 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:55.236 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:06:55.236 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:06:55.237 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:06:55.237 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:06:55.237 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:06:55.237 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:06:55.237 Initialization complete. Launching workers. 
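The perf invocation recorded just above drives the write-phase results that follow. As a reading aid, here is that command broken out; the binary path and arguments are copied verbatim from the log, while the per-flag notes are assumptions based on the spdk_nvme_perf help text rather than anything this log prints:

# Sketch of the invocation recorded above. Path and arguments are taken from
# the log line; the flag annotations below are assumed meanings per
# spdk_nvme_perf's own usage text, not output captured here:
#   -q 128    queue depth (outstanding I/Os per namespace)
#   -w write  sequential write workload
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       software latency tracking; the doubled -L also emits the
#             detailed per-bucket histograms seen in this log
#   -i 0      shared memory group ID
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

The MiB/s column in the table below follows directly from these parameters: 13311.97 IOPS at 12288 bytes per I/O is 13311.97 * 12288 / 2^20 = 156.00 MiB/s, matching the first PCIE (0000:00:10.0) row.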
00:06:55.237 ======================================================== 00:06:55.237 Latency(us) 00:06:55.237 Device Information : IOPS MiB/s Average min max 00:06:55.237 PCIE (0000:00:10.0) NSID 1 from core 0: 13311.97 156.00 10983.24 5667.40 222571.71 00:06:55.237 PCIE (0000:00:11.0) NSID 1 from core 0: 13258.97 155.38 10971.97 5801.25 223370.91 00:06:55.237 PCIE (0000:00:13.0) NSID 1 from core 0: 13247.97 155.25 10965.09 5411.21 225354.89 00:06:55.237 PCIE (0000:00:12.0) NSID 1 from core 0: 13247.97 155.25 10949.50 5254.78 225733.43 00:06:55.237 PCIE (0000:00:12.0) NSID 2 from core 0: 13206.97 154.77 10967.80 5324.55 229242.47 00:06:55.237 PCIE (0000:00:12.0) NSID 3 from core 0: 13247.97 155.25 10918.45 5854.63 229937.67 00:06:55.237 ======================================================== 00:06:55.237 Total : 79521.84 931.90 10959.36 5254.78 229937.67 00:06:55.237 00:06:55.237 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:55.237 ================================================================================= 00:06:55.237 1.00000% : 6074.683us 00:06:55.237 10.00000% : 6452.775us 00:06:55.237 25.00000% : 6704.837us 00:06:55.237 50.00000% : 7007.311us 00:06:55.237 75.00000% : 7662.671us 00:06:55.237 90.00000% : 8973.391us 00:06:55.237 95.00000% : 10989.883us 00:06:55.237 98.00000% : 33272.123us 00:06:55.237 99.00000% : 157286.400us 00:06:55.237 99.50000% : 216167.975us 00:06:55.237 99.90000% : 222620.751us 00:06:55.237 99.99000% : 222620.751us 00:06:55.237 99.99900% : 222620.751us 00:06:55.237 99.99990% : 222620.751us 00:06:55.237 99.99999% : 222620.751us 00:06:55.237 00:06:55.237 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:55.237 ================================================================================= 00:06:55.237 1.00000% : 6125.095us 00:06:55.237 10.00000% : 6553.600us 00:06:55.237 25.00000% : 6755.249us 00:06:55.237 50.00000% : 7007.311us 00:06:55.237 75.00000% : 7612.258us 00:06:55.237 90.00000% : 8822.154us 00:06:55.237 95.00000% : 11090.708us 00:06:55.237 98.00000% : 31457.280us 00:06:55.237 99.00000% : 162932.578us 00:06:55.237 99.50000% : 216167.975us 00:06:55.237 99.90000% : 224233.945us 00:06:55.237 99.99000% : 224233.945us 00:06:55.237 99.99900% : 224233.945us 00:06:55.237 99.99990% : 224233.945us 00:06:55.237 99.99999% : 224233.945us 00:06:55.237 00:06:55.237 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:55.237 ================================================================================= 00:06:55.237 1.00000% : 6049.477us 00:06:55.237 10.00000% : 6553.600us 00:06:55.237 25.00000% : 6755.249us 00:06:55.237 50.00000% : 7007.311us 00:06:55.237 75.00000% : 7612.258us 00:06:55.237 90.00000% : 8872.566us 00:06:55.237 95.00000% : 10889.058us 00:06:55.237 98.00000% : 30449.034us 00:06:55.237 99.00000% : 154060.012us 00:06:55.237 99.50000% : 217781.169us 00:06:55.237 99.90000% : 225847.138us 00:06:55.237 99.99000% : 225847.138us 00:06:55.237 99.99900% : 225847.138us 00:06:55.237 99.99990% : 225847.138us 00:06:55.237 99.99999% : 225847.138us 00:06:55.237 00:06:55.237 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:55.237 ================================================================================= 00:06:55.237 1.00000% : 6074.683us 00:06:55.237 10.00000% : 6553.600us 00:06:55.237 25.00000% : 6755.249us 00:06:55.237 50.00000% : 7007.311us 00:06:55.237 75.00000% : 7612.258us 00:06:55.237 90.00000% : 8872.566us 00:06:55.237 95.00000% : 10636.997us 00:06:55.237 98.00000% : 
28634.191us 00:06:55.237 99.00000% : 154060.012us 00:06:55.237 99.50000% : 217781.169us 00:06:55.237 99.90000% : 225847.138us 00:06:55.237 99.99000% : 225847.138us 00:06:55.237 99.99900% : 225847.138us 00:06:55.237 99.99990% : 225847.138us 00:06:55.237 99.99999% : 225847.138us 00:06:55.237 00:06:55.237 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:55.237 ================================================================================= 00:06:55.237 1.00000% : 6150.302us 00:06:55.237 10.00000% : 6553.600us 00:06:55.237 25.00000% : 6755.249us 00:06:55.237 50.00000% : 6956.898us 00:06:55.237 75.00000% : 7662.671us 00:06:55.237 90.00000% : 8973.391us 00:06:55.237 95.00000% : 10485.760us 00:06:55.237 98.00000% : 26819.348us 00:06:55.237 99.00000% : 157286.400us 00:06:55.237 99.50000% : 217781.169us 00:06:55.237 99.90000% : 229073.526us 00:06:55.237 99.99000% : 230686.720us 00:06:55.237 99.99900% : 230686.720us 00:06:55.237 99.99990% : 230686.720us 00:06:55.237 99.99999% : 230686.720us 00:06:55.237 00:06:55.237 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:55.237 ================================================================================= 00:06:55.237 1.00000% : 6150.302us 00:06:55.237 10.00000% : 6553.600us 00:06:55.237 25.00000% : 6755.249us 00:06:55.237 50.00000% : 7007.311us 00:06:55.237 75.00000% : 7662.671us 00:06:55.237 90.00000% : 8922.978us 00:06:55.237 95.00000% : 10636.997us 00:06:55.237 98.00000% : 21072.345us 00:06:55.237 99.00000% : 154866.609us 00:06:55.237 99.50000% : 222620.751us 00:06:55.237 99.90000% : 230686.720us 00:06:55.237 99.99000% : 230686.720us 00:06:55.237 99.99900% : 230686.720us 00:06:55.237 99.99990% : 230686.720us 00:06:55.237 99.99999% : 230686.720us 00:06:55.237 00:06:55.237 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:55.237 ============================================================================== 00:06:55.237 Range in us Cumulative IO count 00:06:55.237 5646.178 - 5671.385: 0.0075% ( 1) 00:06:55.237 5721.797 - 5747.003: 0.0150% ( 1) 00:06:55.237 5772.209 - 5797.415: 0.0300% ( 2) 00:06:55.237 5797.415 - 5822.622: 0.0676% ( 5) 00:06:55.237 5822.622 - 5847.828: 0.2254% ( 21) 00:06:55.237 5847.828 - 5873.034: 0.4056% ( 24) 00:06:55.237 5873.034 - 5898.240: 0.4657% ( 8) 00:06:55.237 5898.240 - 5923.446: 0.6611% ( 26) 00:06:55.237 5923.446 - 5948.652: 0.7061% ( 6) 00:06:55.237 5948.652 - 5973.858: 0.7512% ( 6) 00:06:55.237 5973.858 - 5999.065: 0.8263% ( 10) 00:06:55.237 5999.065 - 6024.271: 0.9014% ( 10) 00:06:55.237 6024.271 - 6049.477: 0.9991% ( 13) 00:06:55.237 6049.477 - 6074.683: 1.1043% ( 14) 00:06:55.237 6074.683 - 6099.889: 1.3371% ( 31) 00:06:55.237 6099.889 - 6125.095: 1.6977% ( 48) 00:06:55.237 6125.095 - 6150.302: 2.2686% ( 76) 00:06:55.237 6150.302 - 6175.508: 2.6893% ( 56) 00:06:55.237 6175.508 - 6200.714: 3.0349% ( 46) 00:06:55.237 6200.714 - 6225.920: 3.4706% ( 58) 00:06:55.237 6225.920 - 6251.126: 3.8987% ( 57) 00:06:55.237 6251.126 - 6276.332: 4.3570% ( 61) 00:06:55.237 6276.332 - 6301.538: 5.3110% ( 127) 00:06:55.237 6301.538 - 6326.745: 5.9570% ( 86) 00:06:55.237 6326.745 - 6351.951: 6.6181% ( 88) 00:06:55.237 6351.951 - 6377.157: 7.1890% ( 76) 00:06:55.237 6377.157 - 6402.363: 8.2782% ( 145) 00:06:55.237 6402.363 - 6427.569: 9.4651% ( 158) 00:06:55.237 6427.569 - 6452.775: 10.7722% ( 174) 00:06:55.237 6452.775 - 6503.188: 13.8597% ( 411) 00:06:55.237 6503.188 - 6553.600: 17.3903% ( 470) 00:06:55.237 6553.600 - 6604.012: 21.1238% ( 497) 00:06:55.237 6604.012 - 
6654.425: 24.8347% ( 494) 00:06:55.237 6654.425 - 6704.837: 28.7861% ( 526) 00:06:55.237 6704.837 - 6755.249: 32.3543% ( 475) 00:06:55.237 6755.249 - 6805.662: 36.2380% ( 517) 00:06:55.237 6805.662 - 6856.074: 39.9564% ( 495) 00:06:55.237 6856.074 - 6906.486: 43.9979% ( 538) 00:06:55.237 6906.486 - 6956.898: 47.4459% ( 459) 00:06:55.237 6956.898 - 7007.311: 50.3606% ( 388) 00:06:55.237 7007.311 - 7057.723: 53.6734% ( 441) 00:06:55.237 7057.723 - 7108.135: 56.7683% ( 412) 00:06:55.237 7108.135 - 7158.548: 59.6004% ( 377) 00:06:55.237 7158.548 - 7208.960: 62.4174% ( 375) 00:06:55.237 7208.960 - 7259.372: 64.8588% ( 325) 00:06:55.237 7259.372 - 7309.785: 66.5941% ( 231) 00:06:55.237 7309.785 - 7360.197: 68.1716% ( 210) 00:06:55.237 7360.197 - 7410.609: 69.6439% ( 196) 00:06:55.237 7410.609 - 7461.022: 70.8759% ( 164) 00:06:55.237 7461.022 - 7511.434: 72.0328% ( 154) 00:06:55.237 7511.434 - 7561.846: 73.1596% ( 150) 00:06:55.237 7561.846 - 7612.258: 74.2112% ( 140) 00:06:55.237 7612.258 - 7662.671: 75.1653% ( 127) 00:06:55.237 7662.671 - 7713.083: 75.9014% ( 98) 00:06:55.237 7713.083 - 7763.495: 76.9306% ( 137) 00:06:55.237 7763.495 - 7813.908: 77.6968% ( 102) 00:06:55.237 7813.908 - 7864.320: 78.5231% ( 110) 00:06:55.237 7864.320 - 7914.732: 79.3119% ( 105) 00:06:55.237 7914.732 - 7965.145: 79.8002% ( 65) 00:06:55.237 7965.145 - 8015.557: 80.4462% ( 86) 00:06:55.237 8015.557 - 8065.969: 81.0021% ( 74) 00:06:55.237 8065.969 - 8116.382: 81.7383% ( 98) 00:06:55.237 8116.382 - 8166.794: 82.3843% ( 86) 00:06:55.237 8166.794 - 8217.206: 82.8951% ( 68) 00:06:55.237 8217.206 - 8267.618: 83.5261% ( 84) 00:06:55.237 8267.618 - 8318.031: 84.0219% ( 66) 00:06:55.237 8318.031 - 8368.443: 84.5628% ( 72) 00:06:55.237 8368.443 - 8418.855: 85.3290% ( 102) 00:06:55.237 8418.855 - 8469.268: 86.0051% ( 90) 00:06:55.237 8469.268 - 8519.680: 86.7338% ( 97) 00:06:55.237 8519.680 - 8570.092: 87.2897% ( 74) 00:06:55.238 8570.092 - 8620.505: 87.9282% ( 85) 00:06:55.238 8620.505 - 8670.917: 88.3714% ( 59) 00:06:55.238 8670.917 - 8721.329: 88.7620% ( 52) 00:06:55.238 8721.329 - 8771.742: 89.1301% ( 49) 00:06:55.238 8771.742 - 8822.154: 89.4081% ( 37) 00:06:55.238 8822.154 - 8872.566: 89.6785% ( 36) 00:06:55.238 8872.566 - 8922.978: 89.9264% ( 33) 00:06:55.238 8922.978 - 8973.391: 90.2344% ( 41) 00:06:55.238 8973.391 - 9023.803: 90.4672% ( 31) 00:06:55.238 9023.803 - 9074.215: 90.8804% ( 55) 00:06:55.238 9074.215 - 9124.628: 91.2410% ( 48) 00:06:55.238 9124.628 - 9175.040: 91.6091% ( 49) 00:06:55.238 9175.040 - 9225.452: 91.8720% ( 35) 00:06:55.238 9225.452 - 9275.865: 92.0222% ( 20) 00:06:55.238 9275.865 - 9326.277: 92.1800% ( 21) 00:06:55.238 9326.277 - 9376.689: 92.3152% ( 18) 00:06:55.238 9376.689 - 9427.102: 92.4279% ( 15) 00:06:55.238 9427.102 - 9477.514: 92.5706% ( 19) 00:06:55.238 9477.514 - 9527.926: 92.6983% ( 17) 00:06:55.238 9527.926 - 9578.338: 92.8786% ( 24) 00:06:55.238 9578.338 - 9628.751: 92.9988% ( 16) 00:06:55.238 9628.751 - 9679.163: 93.1490% ( 20) 00:06:55.238 9679.163 - 9729.575: 93.2692% ( 16) 00:06:55.238 9729.575 - 9779.988: 93.3819% ( 15) 00:06:55.238 9779.988 - 9830.400: 93.4270% ( 6) 00:06:55.238 9830.400 - 9880.812: 93.4946% ( 9) 00:06:55.238 9880.812 - 9931.225: 93.5547% ( 8) 00:06:55.238 9931.225 - 9981.637: 93.6148% ( 8) 00:06:55.238 9981.637 - 10032.049: 93.6749% ( 8) 00:06:55.238 10032.049 - 10082.462: 93.7425% ( 9) 00:06:55.238 10082.462 - 10132.874: 93.7951% ( 7) 00:06:55.238 10132.874 - 10183.286: 93.8927% ( 13) 00:06:55.238 10183.286 - 10233.698: 94.0204% ( 17) 00:06:55.238 10233.698 
- 10284.111: 94.1782% ( 21) 00:06:55.238 10284.111 - 10334.523: 94.2909% ( 15) 00:06:55.238 10334.523 - 10384.935: 94.3510% ( 8) 00:06:55.238 10384.935 - 10435.348: 94.4186% ( 9) 00:06:55.238 10435.348 - 10485.760: 94.4561% ( 5) 00:06:55.238 10485.760 - 10536.172: 94.5087% ( 7) 00:06:55.238 10536.172 - 10586.585: 94.5463% ( 5) 00:06:55.238 10586.585 - 10636.997: 94.6139% ( 9) 00:06:55.238 10636.997 - 10687.409: 94.6665% ( 7) 00:06:55.238 10687.409 - 10737.822: 94.7416% ( 10) 00:06:55.238 10737.822 - 10788.234: 94.8392% ( 13) 00:06:55.238 10788.234 - 10838.646: 94.8843% ( 6) 00:06:55.238 10838.646 - 10889.058: 94.9219% ( 5) 00:06:55.238 10889.058 - 10939.471: 94.9895% ( 9) 00:06:55.238 10939.471 - 10989.883: 95.0346% ( 6) 00:06:55.238 10989.883 - 11040.295: 95.0721% ( 5) 00:06:55.238 11040.295 - 11090.708: 95.1172% ( 6) 00:06:55.238 11090.708 - 11141.120: 95.1547% ( 5) 00:06:55.238 11141.120 - 11191.532: 95.1998% ( 6) 00:06:55.238 11191.532 - 11241.945: 95.2299% ( 4) 00:06:55.238 11241.945 - 11292.357: 95.2599% ( 4) 00:06:55.238 11292.357 - 11342.769: 95.3125% ( 7) 00:06:55.238 11342.769 - 11393.182: 95.3501% ( 5) 00:06:55.238 11393.182 - 11443.594: 95.3951% ( 6) 00:06:55.238 11443.594 - 11494.006: 95.4402% ( 6) 00:06:55.238 11494.006 - 11544.418: 95.4853% ( 6) 00:06:55.238 11544.418 - 11594.831: 95.5303% ( 6) 00:06:55.238 11594.831 - 11645.243: 95.5829% ( 7) 00:06:55.238 11645.243 - 11695.655: 95.6205% ( 5) 00:06:55.238 11695.655 - 11746.068: 95.6355% ( 2) 00:06:55.238 11746.068 - 11796.480: 95.6656% ( 4) 00:06:55.238 11796.480 - 11846.892: 95.6731% ( 1) 00:06:55.238 12905.551 - 13006.375: 95.6881% ( 2) 00:06:55.238 13006.375 - 13107.200: 95.7407% ( 7) 00:06:55.238 13107.200 - 13208.025: 95.7707% ( 4) 00:06:55.238 13208.025 - 13308.849: 95.8233% ( 7) 00:06:55.238 13308.849 - 13409.674: 95.8609% ( 5) 00:06:55.238 13409.674 - 13510.498: 95.8909% ( 4) 00:06:55.238 13510.498 - 13611.323: 95.9285% ( 5) 00:06:55.238 13611.323 - 13712.148: 95.9435% ( 2) 00:06:55.238 13712.148 - 13812.972: 95.9961% ( 7) 00:06:55.238 13812.972 - 13913.797: 96.0261% ( 4) 00:06:55.238 13913.797 - 14014.622: 96.0637% ( 5) 00:06:55.238 14014.622 - 14115.446: 96.1013% ( 5) 00:06:55.238 14115.446 - 14216.271: 96.1313% ( 4) 00:06:55.238 14216.271 - 14317.095: 96.1614% ( 4) 00:06:55.238 14317.095 - 14417.920: 96.2139% ( 7) 00:06:55.238 14417.920 - 14518.745: 96.2665% ( 7) 00:06:55.238 14518.745 - 14619.569: 96.3116% ( 6) 00:06:55.238 14619.569 - 14720.394: 96.3341% ( 3) 00:06:55.238 14720.394 - 14821.218: 96.3642% ( 4) 00:06:55.238 14821.218 - 14922.043: 96.4093% ( 6) 00:06:55.238 14922.043 - 15022.868: 96.4393% ( 4) 00:06:55.238 15022.868 - 15123.692: 96.4694% ( 4) 00:06:55.238 15123.692 - 15224.517: 96.5144% ( 6) 00:06:55.238 15224.517 - 15325.342: 96.5595% ( 6) 00:06:55.238 15325.342 - 15426.166: 96.5971% ( 5) 00:06:55.238 15426.166 - 15526.991: 96.6346% ( 5) 00:06:55.238 15627.815 - 15728.640: 96.6722% ( 5) 00:06:55.238 15728.640 - 15829.465: 96.7548% ( 11) 00:06:55.238 15829.465 - 15930.289: 96.7924% ( 5) 00:06:55.238 15930.289 - 16031.114: 96.8224% ( 4) 00:06:55.238 16031.114 - 16131.938: 96.8675% ( 6) 00:06:55.238 16131.938 - 16232.763: 96.9126% ( 6) 00:06:55.238 16232.763 - 16333.588: 96.9576% ( 6) 00:06:55.238 16333.588 - 16434.412: 96.9952% ( 5) 00:06:55.238 16434.412 - 16535.237: 97.0403% ( 6) 00:06:55.238 16535.237 - 16636.062: 97.0853% ( 6) 00:06:55.238 16636.062 - 16736.886: 97.1154% ( 4) 00:06:55.238 26617.698 - 26819.348: 97.1379% ( 3) 00:06:55.238 26819.348 - 27020.997: 97.1905% ( 7) 00:06:55.238 27020.997 
- 27222.646: 97.2506% ( 8) 00:06:55.238 27222.646 - 27424.295: 97.3107% ( 8) 00:06:55.238 27424.295 - 27625.945: 97.3708% ( 8) 00:06:55.238 27625.945 - 27827.594: 97.4234% ( 7) 00:06:55.238 27827.594 - 28029.243: 97.4760% ( 7) 00:06:55.238 28029.243 - 28230.892: 97.5361% ( 8) 00:06:55.238 28230.892 - 28432.542: 97.5962% ( 8) 00:06:55.238 31658.929 - 31860.578: 97.6412% ( 6) 00:06:55.238 31860.578 - 32062.228: 97.7013% ( 8) 00:06:55.238 32062.228 - 32263.877: 97.7539% ( 7) 00:06:55.238 32263.877 - 32465.526: 97.8215% ( 9) 00:06:55.238 32465.526 - 32667.175: 97.8741% ( 7) 00:06:55.238 32667.175 - 32868.825: 97.9342% ( 8) 00:06:55.238 32868.825 - 33070.474: 97.9868% ( 7) 00:06:55.238 33070.474 - 33272.123: 98.0544% ( 9) 00:06:55.238 33272.123 - 33473.772: 98.0769% ( 3) 00:06:55.238 149220.431 - 150027.028: 98.1671% ( 12) 00:06:55.238 152446.818 - 153253.415: 98.5126% ( 46) 00:06:55.238 153253.415 - 154060.012: 98.5877% ( 10) 00:06:55.238 154060.012 - 154866.609: 98.9483% ( 48) 00:06:55.238 156479.803 - 157286.400: 99.0385% ( 12) 00:06:55.238 211328.394 - 212941.588: 99.3465% ( 41) 00:06:55.238 212941.588 - 214554.782: 99.3765% ( 4) 00:06:55.238 214554.782 - 216167.975: 99.5192% ( 19) 00:06:55.238 217781.169 - 219394.363: 99.6094% ( 12) 00:06:55.238 219394.363 - 221007.557: 99.6620% ( 7) 00:06:55.238 221007.557 - 222620.751: 100.0000% ( 45) 00:06:55.238 00:06:55.238 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:55.238 ============================================================================== 00:06:55.238 Range in us Cumulative IO count 00:06:55.238 5797.415 - 5822.622: 0.0302% ( 4) 00:06:55.238 5822.622 - 5847.828: 0.0603% ( 4) 00:06:55.238 5847.828 - 5873.034: 0.0980% ( 5) 00:06:55.238 5873.034 - 5898.240: 0.1508% ( 7) 00:06:55.238 5898.240 - 5923.446: 0.2112% ( 8) 00:06:55.238 5923.446 - 5948.652: 0.2715% ( 8) 00:06:55.238 5948.652 - 5973.858: 0.3696% ( 13) 00:06:55.238 5973.858 - 5999.065: 0.4224% ( 7) 00:06:55.238 5999.065 - 6024.271: 0.5204% ( 13) 00:06:55.238 6024.271 - 6049.477: 0.7014% ( 24) 00:06:55.238 6049.477 - 6074.683: 0.8221% ( 16) 00:06:55.238 6074.683 - 6099.889: 0.9578% ( 18) 00:06:55.238 6099.889 - 6125.095: 1.1313% ( 23) 00:06:55.238 6125.095 - 6150.302: 1.2520% ( 16) 00:06:55.238 6150.302 - 6175.508: 1.4028% ( 20) 00:06:55.238 6175.508 - 6200.714: 1.5914% ( 25) 00:06:55.238 6200.714 - 6225.920: 1.8403% ( 33) 00:06:55.238 6225.920 - 6251.126: 2.1042% ( 35) 00:06:55.238 6251.126 - 6276.332: 2.4587% ( 47) 00:06:55.238 6276.332 - 6301.538: 2.7378% ( 37) 00:06:55.238 6301.538 - 6326.745: 3.1375% ( 53) 00:06:55.238 6326.745 - 6351.951: 3.9973% ( 114) 00:06:55.238 6351.951 - 6377.157: 4.4875% ( 65) 00:06:55.238 6377.157 - 6402.363: 5.0833% ( 79) 00:06:55.238 6402.363 - 6427.569: 5.8753% ( 105) 00:06:55.238 6427.569 - 6452.775: 6.6521% ( 103) 00:06:55.238 6452.775 - 6503.188: 8.3490% ( 225) 00:06:55.238 6503.188 - 6553.600: 11.0189% ( 354) 00:06:55.238 6553.600 - 6604.012: 14.7221% ( 491) 00:06:55.238 6604.012 - 6654.425: 18.6892% ( 526) 00:06:55.238 6654.425 - 6704.837: 23.8329% ( 682) 00:06:55.238 6704.837 - 6755.249: 28.9313% ( 676) 00:06:55.238 6755.249 - 6805.662: 34.5275% ( 742) 00:06:55.238 6805.662 - 6856.074: 38.9698% ( 589) 00:06:55.238 6856.074 - 6906.486: 44.2567% ( 701) 00:06:55.238 6906.486 - 6956.898: 49.2873% ( 667) 00:06:55.238 6956.898 - 7007.311: 54.5139% ( 693) 00:06:55.238 7007.311 - 7057.723: 58.7525% ( 562) 00:06:55.238 7057.723 - 7108.135: 62.0786% ( 441) 00:06:55.238 7108.135 - 7158.548: 64.9823% ( 385) 00:06:55.238 7158.548 - 
7208.960: 67.6446% ( 353) 00:06:55.238 7208.960 - 7259.372: 69.4019% ( 233) 00:06:55.238 7259.372 - 7309.785: 70.8123% ( 187) 00:06:55.238 7309.785 - 7360.197: 71.9738% ( 154) 00:06:55.238 7360.197 - 7410.609: 72.5620% ( 78) 00:06:55.238 7410.609 - 7461.022: 72.9995% ( 58) 00:06:55.238 7461.022 - 7511.434: 73.6481% ( 86) 00:06:55.238 7511.434 - 7561.846: 74.2515% ( 80) 00:06:55.238 7561.846 - 7612.258: 75.1037% ( 113) 00:06:55.239 7612.258 - 7662.671: 75.9032% ( 106) 00:06:55.239 7662.671 - 7713.083: 76.7328% ( 110) 00:06:55.239 7713.083 - 7763.495: 77.2155% ( 64) 00:06:55.239 7763.495 - 7813.908: 77.7736% ( 74) 00:06:55.239 7813.908 - 7864.320: 78.3468% ( 76) 00:06:55.239 7864.320 - 7914.732: 78.9728% ( 83) 00:06:55.239 7914.732 - 7965.145: 79.5837% ( 81) 00:06:55.239 7965.145 - 8015.557: 80.1493% ( 75) 00:06:55.239 8015.557 - 8065.969: 80.8281% ( 90) 00:06:55.239 8065.969 - 8116.382: 81.6804% ( 113) 00:06:55.239 8116.382 - 8166.794: 82.5025% ( 109) 00:06:55.239 8166.794 - 8217.206: 83.1284% ( 83) 00:06:55.239 8217.206 - 8267.618: 83.7469% ( 82) 00:06:55.239 8267.618 - 8318.031: 84.7198% ( 129) 00:06:55.239 8318.031 - 8368.443: 85.3684% ( 86) 00:06:55.239 8368.443 - 8418.855: 85.8285% ( 61) 00:06:55.239 8418.855 - 8469.268: 86.3036% ( 63) 00:06:55.239 8469.268 - 8519.680: 86.8919% ( 78) 00:06:55.239 8519.680 - 8570.092: 87.5481% ( 87) 00:06:55.239 8570.092 - 8620.505: 88.4003% ( 113) 00:06:55.239 8620.505 - 8670.917: 88.9358% ( 71) 00:06:55.239 8670.917 - 8721.329: 89.5241% ( 78) 00:06:55.239 8721.329 - 8771.742: 89.9540% ( 57) 00:06:55.239 8771.742 - 8822.154: 90.3085% ( 47) 00:06:55.239 8822.154 - 8872.566: 90.6403% ( 44) 00:06:55.239 8872.566 - 8922.978: 90.9118% ( 36) 00:06:55.239 8922.978 - 8973.391: 91.0778% ( 22) 00:06:55.239 8973.391 - 9023.803: 91.2965% ( 29) 00:06:55.239 9023.803 - 9074.215: 91.5906% ( 39) 00:06:55.239 9074.215 - 9124.628: 91.8395% ( 33) 00:06:55.239 9124.628 - 9175.040: 92.0658% ( 30) 00:06:55.239 9175.040 - 9225.452: 92.3825% ( 42) 00:06:55.239 9225.452 - 9275.865: 92.5032% ( 16) 00:06:55.239 9275.865 - 9326.277: 92.6239% ( 16) 00:06:55.239 9326.277 - 9376.689: 92.7219% ( 13) 00:06:55.239 9376.689 - 9427.102: 92.7898% ( 9) 00:06:55.239 9427.102 - 9477.514: 92.8426% ( 7) 00:06:55.239 9477.514 - 9527.926: 92.8803% ( 5) 00:06:55.239 9527.926 - 9578.338: 92.9256% ( 6) 00:06:55.239 9578.338 - 9628.751: 92.9784% ( 7) 00:06:55.239 9628.751 - 9679.163: 93.1367% ( 21) 00:06:55.239 9679.163 - 9729.575: 93.3253% ( 25) 00:06:55.239 9729.575 - 9779.988: 93.5063% ( 24) 00:06:55.239 9779.988 - 9830.400: 93.6496% ( 19) 00:06:55.239 9830.400 - 9880.812: 93.8004% ( 20) 00:06:55.239 9880.812 - 9931.225: 93.8306% ( 4) 00:06:55.239 9931.225 - 9981.637: 93.8608% ( 4) 00:06:55.239 9981.637 - 10032.049: 93.8834% ( 3) 00:06:55.239 10032.049 - 10082.462: 93.9211% ( 5) 00:06:55.239 10082.462 - 10132.874: 93.9513% ( 4) 00:06:55.239 10132.874 - 10183.286: 93.9965% ( 6) 00:06:55.239 10183.286 - 10233.698: 94.0342% ( 5) 00:06:55.239 10233.698 - 10284.111: 94.0644% ( 4) 00:06:55.239 10284.111 - 10334.523: 94.1021% ( 5) 00:06:55.239 10334.523 - 10384.935: 94.1851% ( 11) 00:06:55.239 10384.935 - 10435.348: 94.2530% ( 9) 00:06:55.239 10435.348 - 10485.760: 94.3435% ( 12) 00:06:55.239 10485.760 - 10536.172: 94.4189% ( 10) 00:06:55.239 10536.172 - 10586.585: 94.4792% ( 8) 00:06:55.239 10586.585 - 10636.997: 94.5320% ( 7) 00:06:55.239 10636.997 - 10687.409: 94.5773% ( 6) 00:06:55.239 10687.409 - 10737.822: 94.6301% ( 7) 00:06:55.239 10737.822 - 10788.234: 94.6753% ( 6) 00:06:55.239 10788.234 - 
10838.646: 94.7281% ( 7) 00:06:55.239 10838.646 - 10889.058: 94.7734% ( 6) 00:06:55.239 10889.058 - 10939.471: 94.8262% ( 7) 00:06:55.239 10939.471 - 10989.883: 94.8865% ( 8) 00:06:55.239 10989.883 - 11040.295: 94.9921% ( 14) 00:06:55.239 11040.295 - 11090.708: 95.0675% ( 10) 00:06:55.239 11090.708 - 11141.120: 95.1429% ( 10) 00:06:55.239 11141.120 - 11191.532: 95.2108% ( 9) 00:06:55.239 11191.532 - 11241.945: 95.3164% ( 14) 00:06:55.239 11241.945 - 11292.357: 95.3767% ( 8) 00:06:55.239 11292.357 - 11342.769: 95.4295% ( 7) 00:06:55.239 11342.769 - 11393.182: 95.4748% ( 6) 00:06:55.239 11393.182 - 11443.594: 95.5200% ( 6) 00:06:55.239 11443.594 - 11494.006: 95.5577% ( 5) 00:06:55.239 11494.006 - 11544.418: 95.5728% ( 2) 00:06:55.239 11544.418 - 11594.831: 95.5879% ( 2) 00:06:55.239 11594.831 - 11645.243: 95.6030% ( 2) 00:06:55.239 11645.243 - 11695.655: 95.6332% ( 4) 00:06:55.239 11695.655 - 11746.068: 95.6558% ( 3) 00:06:55.239 12199.778 - 12250.191: 95.6859% ( 4) 00:06:55.239 12250.191 - 12300.603: 95.7387% ( 7) 00:06:55.239 12300.603 - 12351.015: 95.7840% ( 6) 00:06:55.239 12351.015 - 12401.428: 95.8066% ( 3) 00:06:55.239 12401.428 - 12451.840: 95.8292% ( 3) 00:06:55.239 12451.840 - 12502.252: 95.8443% ( 2) 00:06:55.239 12502.252 - 12552.665: 95.8670% ( 3) 00:06:55.239 12552.665 - 12603.077: 95.8820% ( 2) 00:06:55.239 12603.077 - 12653.489: 95.9047% ( 3) 00:06:55.239 12653.489 - 12703.902: 95.9273% ( 3) 00:06:55.239 12703.902 - 12754.314: 95.9499% ( 3) 00:06:55.239 12754.314 - 12804.726: 95.9725% ( 3) 00:06:55.239 12804.726 - 12855.138: 95.9952% ( 3) 00:06:55.239 12855.138 - 12905.551: 96.0178% ( 3) 00:06:55.239 12905.551 - 13006.375: 96.0555% ( 5) 00:06:55.239 13006.375 - 13107.200: 96.1008% ( 6) 00:06:55.239 13107.200 - 13208.025: 96.1385% ( 5) 00:06:55.239 15022.868 - 15123.692: 96.1460% ( 1) 00:06:55.239 15123.692 - 15224.517: 96.1536% ( 1) 00:06:55.239 15224.517 - 15325.342: 96.2064% ( 7) 00:06:55.239 15325.342 - 15426.166: 96.2969% ( 12) 00:06:55.239 15426.166 - 15526.991: 96.4175% ( 16) 00:06:55.239 15526.991 - 15627.815: 96.5910% ( 23) 00:06:55.239 15627.815 - 15728.640: 96.7192% ( 17) 00:06:55.239 15728.640 - 15829.465: 96.8248% ( 14) 00:06:55.239 15829.465 - 15930.289: 96.9530% ( 17) 00:06:55.239 15930.289 - 16031.114: 96.9907% ( 5) 00:06:55.239 16031.114 - 16131.938: 97.0360% ( 6) 00:06:55.239 16131.938 - 16232.763: 97.0737% ( 5) 00:06:55.239 16232.763 - 16333.588: 97.1039% ( 4) 00:06:55.239 25004.505 - 25105.329: 97.1265% ( 3) 00:06:55.239 25105.329 - 25206.154: 97.1566% ( 4) 00:06:55.239 25206.154 - 25306.978: 97.1868% ( 4) 00:06:55.239 25306.978 - 25407.803: 97.2170% ( 4) 00:06:55.239 25407.803 - 25508.628: 97.2472% ( 4) 00:06:55.239 25508.628 - 25609.452: 97.2773% ( 4) 00:06:55.239 25609.452 - 25710.277: 97.3150% ( 5) 00:06:55.239 25710.277 - 25811.102: 97.3452% ( 4) 00:06:55.239 25811.102 - 26012.751: 97.4055% ( 8) 00:06:55.239 26012.751 - 26214.400: 97.4734% ( 9) 00:06:55.239 26214.400 - 26416.049: 97.5338% ( 8) 00:06:55.239 26416.049 - 26617.698: 97.5865% ( 7) 00:06:55.239 29844.086 - 30045.735: 97.6092% ( 3) 00:06:55.239 30045.735 - 30247.385: 97.6770% ( 9) 00:06:55.239 30247.385 - 30449.034: 97.7298% ( 7) 00:06:55.239 30449.034 - 30650.683: 97.7902% ( 8) 00:06:55.239 30650.683 - 30852.332: 97.8505% ( 8) 00:06:55.239 30852.332 - 31053.982: 97.9184% ( 9) 00:06:55.239 31053.982 - 31255.631: 97.9712% ( 7) 00:06:55.239 31255.631 - 31457.280: 98.0391% ( 9) 00:06:55.239 31457.280 - 31658.929: 98.0692% ( 4) 00:06:55.239 146800.640 - 147607.237: 98.2729% ( 27) 00:06:55.239 
150027.028 - 150833.625: 98.3558% ( 11) 00:06:55.239 150833.625 - 151640.222: 98.3709% ( 2) 00:06:55.239 151640.222 - 152446.818: 98.5293% ( 21) 00:06:55.239 152446.818 - 153253.415: 98.8084% ( 37) 00:06:55.239 158092.997 - 158899.594: 98.9290% ( 16) 00:06:55.239 161319.385 - 162125.982: 98.9894% ( 8) 00:06:55.239 162125.982 - 162932.578: 99.0346% ( 6) 00:06:55.239 211328.394 - 212941.588: 99.2533% ( 29) 00:06:55.239 214554.782 - 216167.975: 99.5173% ( 35) 00:06:55.239 219394.363 - 221007.557: 99.7813% ( 35) 00:06:55.239 222620.751 - 224233.945: 100.0000% ( 29) 00:06:55.239 00:06:55.239 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:55.239 ============================================================================== 00:06:55.239 Range in us Cumulative IO count 00:06:55.239 5394.117 - 5419.323: 0.0075% ( 1) 00:06:55.239 5419.323 - 5444.529: 0.0226% ( 2) 00:06:55.239 5444.529 - 5469.735: 0.0453% ( 3) 00:06:55.239 5469.735 - 5494.942: 0.0679% ( 3) 00:06:55.239 5494.942 - 5520.148: 0.0830% ( 2) 00:06:55.239 5520.148 - 5545.354: 0.1057% ( 3) 00:06:55.239 5545.354 - 5570.560: 0.1359% ( 4) 00:06:55.239 5570.560 - 5595.766: 0.1510% ( 2) 00:06:55.239 5595.766 - 5620.972: 0.2189% ( 9) 00:06:55.239 5620.972 - 5646.178: 0.3548% ( 18) 00:06:55.239 5646.178 - 5671.385: 0.3623% ( 1) 00:06:55.239 5671.385 - 5696.591: 0.3774% ( 2) 00:06:55.239 5696.591 - 5721.797: 0.3850% ( 1) 00:06:55.239 5721.797 - 5747.003: 0.3925% ( 1) 00:06:55.239 5747.003 - 5772.209: 0.4076% ( 2) 00:06:55.239 5772.209 - 5797.415: 0.4227% ( 2) 00:06:55.239 5797.415 - 5822.622: 0.4454% ( 3) 00:06:55.239 5822.622 - 5847.828: 0.4604% ( 2) 00:06:55.239 5847.828 - 5873.034: 0.5133% ( 7) 00:06:55.239 5873.034 - 5898.240: 0.5510% ( 5) 00:06:55.239 5898.240 - 5923.446: 0.6114% ( 8) 00:06:55.239 5923.446 - 5948.652: 0.6718% ( 8) 00:06:55.239 5948.652 - 5973.858: 0.7322% ( 8) 00:06:55.239 5973.858 - 5999.065: 0.8001% ( 9) 00:06:55.239 5999.065 - 6024.271: 0.8756% ( 10) 00:06:55.239 6024.271 - 6049.477: 1.1021% ( 30) 00:06:55.239 6049.477 - 6074.683: 1.2228% ( 16) 00:06:55.239 6074.683 - 6099.889: 1.4719% ( 33) 00:06:55.239 6099.889 - 6125.095: 1.7286% ( 34) 00:06:55.239 6125.095 - 6150.302: 1.9928% ( 35) 00:06:55.239 6150.302 - 6175.508: 2.2418% ( 33) 00:06:55.239 6175.508 - 6200.714: 2.5136% ( 36) 00:06:55.239 6200.714 - 6225.920: 2.7702% ( 34) 00:06:55.239 6225.920 - 6251.126: 3.1250% ( 47) 00:06:55.239 6251.126 - 6276.332: 3.3967% ( 36) 00:06:55.239 6276.332 - 6301.538: 3.9100% ( 68) 00:06:55.239 6301.538 - 6326.745: 4.7479% ( 111) 00:06:55.240 6326.745 - 6351.951: 5.3518% ( 80) 00:06:55.240 6351.951 - 6377.157: 5.8801% ( 70) 00:06:55.240 6377.157 - 6402.363: 6.3406% ( 61) 00:06:55.240 6402.363 - 6427.569: 6.8463% ( 67) 00:06:55.240 6427.569 - 6452.775: 7.3822% ( 71) 00:06:55.240 6452.775 - 6503.188: 8.9900% ( 213) 00:06:55.240 6503.188 - 6553.600: 11.4583% ( 327) 00:06:55.240 6553.600 - 6604.012: 14.8928% ( 455) 00:06:55.240 6604.012 - 6654.425: 19.0670% ( 553) 00:06:55.240 6654.425 - 6704.837: 23.9810% ( 651) 00:06:55.240 6704.837 - 6755.249: 29.1591% ( 686) 00:06:55.240 6755.249 - 6805.662: 34.8958% ( 760) 00:06:55.240 6805.662 - 6856.074: 40.5495% ( 749) 00:06:55.240 6856.074 - 6906.486: 45.7428% ( 688) 00:06:55.240 6906.486 - 6956.898: 49.9774% ( 561) 00:06:55.240 6956.898 - 7007.311: 54.1667% ( 555) 00:06:55.240 7007.311 - 7057.723: 57.8955% ( 494) 00:06:55.240 7057.723 - 7108.135: 61.4281% ( 468) 00:06:55.240 7108.135 - 7158.548: 65.4136% ( 528) 00:06:55.240 7158.548 - 7208.960: 67.5725% ( 286) 00:06:55.240 
7208.960 - 7259.372: 69.2406% ( 221) 00:06:55.240 7259.372 - 7309.785: 70.4182% ( 156) 00:06:55.240 7309.785 - 7360.197: 71.4976% ( 143) 00:06:55.240 7360.197 - 7410.609: 72.4562% ( 127) 00:06:55.240 7410.609 - 7461.022: 73.2790% ( 109) 00:06:55.240 7461.022 - 7511.434: 73.9055% ( 83) 00:06:55.240 7511.434 - 7561.846: 74.5094% ( 80) 00:06:55.240 7561.846 - 7612.258: 75.1812% ( 89) 00:06:55.240 7612.258 - 7662.671: 75.9284% ( 99) 00:06:55.240 7662.671 - 7713.083: 76.5927% ( 88) 00:06:55.240 7713.083 - 7763.495: 77.1890% ( 79) 00:06:55.240 7763.495 - 7813.908: 77.8986% ( 94) 00:06:55.240 7813.908 - 7864.320: 78.5628% ( 88) 00:06:55.240 7864.320 - 7914.732: 79.2950% ( 97) 00:06:55.240 7914.732 - 7965.145: 79.8385% ( 72) 00:06:55.240 7965.145 - 8015.557: 80.5933% ( 100) 00:06:55.240 8015.557 - 8065.969: 81.2123% ( 82) 00:06:55.240 8065.969 - 8116.382: 81.8161% ( 80) 00:06:55.240 8116.382 - 8166.794: 82.5408% ( 96) 00:06:55.240 8166.794 - 8217.206: 83.0314% ( 65) 00:06:55.240 8217.206 - 8267.618: 83.6730% ( 85) 00:06:55.240 8267.618 - 8318.031: 84.6618% ( 131) 00:06:55.240 8318.031 - 8368.443: 85.3865% ( 96) 00:06:55.240 8368.443 - 8418.855: 86.0658% ( 90) 00:06:55.240 8418.855 - 8469.268: 86.6772% ( 81) 00:06:55.240 8469.268 - 8519.680: 87.0999% ( 56) 00:06:55.240 8519.680 - 8570.092: 87.6887% ( 78) 00:06:55.240 8570.092 - 8620.505: 88.0208% ( 44) 00:06:55.240 8620.505 - 8670.917: 88.3152% ( 39) 00:06:55.240 8670.917 - 8721.329: 88.6398% ( 43) 00:06:55.240 8721.329 - 8771.742: 89.2210% ( 77) 00:06:55.240 8771.742 - 8822.154: 89.6362% ( 55) 00:06:55.240 8822.154 - 8872.566: 90.1042% ( 62) 00:06:55.240 8872.566 - 8922.978: 90.4665% ( 48) 00:06:55.240 8922.978 - 8973.391: 90.6929% ( 30) 00:06:55.240 8973.391 - 9023.803: 91.0553% ( 48) 00:06:55.240 9023.803 - 9074.215: 91.3647% ( 41) 00:06:55.240 9074.215 - 9124.628: 91.5685% ( 27) 00:06:55.240 9124.628 - 9175.040: 91.8025% ( 31) 00:06:55.240 9175.040 - 9225.452: 92.0214% ( 29) 00:06:55.240 9225.452 - 9275.865: 92.2026% ( 24) 00:06:55.240 9275.865 - 9326.277: 92.3838% ( 24) 00:06:55.240 9326.277 - 9376.689: 92.5649% ( 24) 00:06:55.240 9376.689 - 9427.102: 92.7687% ( 27) 00:06:55.240 9427.102 - 9477.514: 93.0556% ( 38) 00:06:55.240 9477.514 - 9527.926: 93.1537% ( 13) 00:06:55.240 9527.926 - 9578.338: 93.2367% ( 11) 00:06:55.240 9578.338 - 9628.751: 93.3575% ( 16) 00:06:55.240 9628.751 - 9679.163: 93.4858% ( 17) 00:06:55.240 9679.163 - 9729.575: 93.6292% ( 19) 00:06:55.240 9729.575 - 9779.988: 93.7877% ( 21) 00:06:55.240 9779.988 - 9830.400: 93.8708% ( 11) 00:06:55.240 9830.400 - 9880.812: 93.9312% ( 8) 00:06:55.240 9880.812 - 9931.225: 93.9840% ( 7) 00:06:55.240 9931.225 - 9981.637: 94.0217% ( 5) 00:06:55.240 9981.637 - 10032.049: 94.0746% ( 7) 00:06:55.240 10032.049 - 10082.462: 94.1123% ( 5) 00:06:55.240 10082.462 - 10132.874: 94.1350% ( 3) 00:06:55.240 10132.874 - 10183.286: 94.1501% ( 2) 00:06:55.240 10183.286 - 10233.698: 94.1652% ( 2) 00:06:55.240 10233.698 - 10284.111: 94.1878% ( 3) 00:06:55.240 10284.111 - 10334.523: 94.2029% ( 2) 00:06:55.240 10334.523 - 10384.935: 94.2331% ( 4) 00:06:55.240 10384.935 - 10435.348: 94.2708% ( 5) 00:06:55.240 10435.348 - 10485.760: 94.3312% ( 8) 00:06:55.240 10485.760 - 10536.172: 94.3916% ( 8) 00:06:55.240 10536.172 - 10586.585: 94.4973% ( 14) 00:06:55.240 10586.585 - 10636.997: 94.6105% ( 15) 00:06:55.240 10636.997 - 10687.409: 94.7313% ( 16) 00:06:55.240 10687.409 - 10737.822: 94.7992% ( 9) 00:06:55.240 10737.822 - 10788.234: 94.8671% ( 9) 00:06:55.240 10788.234 - 10838.646: 94.9879% ( 16) 00:06:55.240 
10838.646 - 10889.058: 95.1011% ( 15) 00:06:55.240 10889.058 - 10939.471: 95.1917% ( 12) 00:06:55.240 10939.471 - 10989.883: 95.2823% ( 12) 00:06:55.240 10989.883 - 11040.295: 95.3804% ( 13) 00:06:55.240 11040.295 - 11090.708: 95.4710% ( 12) 00:06:55.240 11090.708 - 11141.120: 95.5691% ( 13) 00:06:55.240 11141.120 - 11191.532: 95.6673% ( 13) 00:06:55.240 11191.532 - 11241.945: 95.7654% ( 13) 00:06:55.240 11241.945 - 11292.357: 95.8635% ( 13) 00:06:55.240 11292.357 - 11342.769: 95.9315% ( 9) 00:06:55.240 11342.769 - 11393.182: 95.9843% ( 7) 00:06:55.240 11393.182 - 11443.594: 96.0296% ( 6) 00:06:55.240 11443.594 - 11494.006: 96.0598% ( 4) 00:06:55.240 11494.006 - 11544.418: 96.0824% ( 3) 00:06:55.240 11544.418 - 11594.831: 96.1126% ( 4) 00:06:55.240 11594.831 - 11645.243: 96.1353% ( 3) 00:06:55.240 14821.218 - 14922.043: 96.1957% ( 8) 00:06:55.240 14922.043 - 15022.868: 96.2258% ( 4) 00:06:55.240 15022.868 - 15123.692: 96.3617% ( 18) 00:06:55.240 15123.692 - 15224.517: 96.4674% ( 14) 00:06:55.240 15224.517 - 15325.342: 96.5051% ( 5) 00:06:55.240 15325.342 - 15426.166: 96.5429% ( 5) 00:06:55.240 15426.166 - 15526.991: 96.5731% ( 4) 00:06:55.240 15526.991 - 15627.815: 96.6108% ( 5) 00:06:55.240 15627.815 - 15728.640: 96.6184% ( 1) 00:06:55.240 16131.938 - 16232.763: 96.6410% ( 3) 00:06:55.240 16232.763 - 16333.588: 96.7240% ( 11) 00:06:55.240 16333.588 - 16434.412: 96.7844% ( 8) 00:06:55.240 16434.412 - 16535.237: 96.8222% ( 5) 00:06:55.240 16535.237 - 16636.062: 96.8675% ( 6) 00:06:55.240 16636.062 - 16736.886: 96.9127% ( 6) 00:06:55.240 16736.886 - 16837.711: 96.9656% ( 7) 00:06:55.240 16837.711 - 16938.535: 97.0109% ( 6) 00:06:55.240 16938.535 - 17039.360: 97.0562% ( 6) 00:06:55.240 17039.360 - 17140.185: 97.1014% ( 6) 00:06:55.240 23895.434 - 23996.258: 97.1165% ( 2) 00:06:55.240 23996.258 - 24097.083: 97.1543% ( 5) 00:06:55.240 24097.083 - 24197.908: 97.1845% ( 4) 00:06:55.240 24197.908 - 24298.732: 97.2147% ( 4) 00:06:55.240 24298.732 - 24399.557: 97.2449% ( 4) 00:06:55.240 24399.557 - 24500.382: 97.2675% ( 3) 00:06:55.240 24500.382 - 24601.206: 97.3053% ( 5) 00:06:55.240 24601.206 - 24702.031: 97.3354% ( 4) 00:06:55.240 24702.031 - 24802.855: 97.3656% ( 4) 00:06:55.240 24802.855 - 24903.680: 97.3958% ( 4) 00:06:55.240 24903.680 - 25004.505: 97.4260% ( 4) 00:06:55.240 25004.505 - 25105.329: 97.4638% ( 5) 00:06:55.240 25105.329 - 25206.154: 97.4940% ( 4) 00:06:55.240 25206.154 - 25306.978: 97.5242% ( 4) 00:06:55.240 25306.978 - 25407.803: 97.5543% ( 4) 00:06:55.240 25407.803 - 25508.628: 97.5845% ( 4) 00:06:55.240 28835.840 - 29037.489: 97.6298% ( 6) 00:06:55.240 29037.489 - 29239.138: 97.6827% ( 7) 00:06:55.240 29239.138 - 29440.788: 97.7431% ( 8) 00:06:55.240 29440.788 - 29642.437: 97.8110% ( 9) 00:06:55.240 29642.437 - 29844.086: 97.8714% ( 8) 00:06:55.240 29844.086 - 30045.735: 97.9318% ( 8) 00:06:55.240 30045.735 - 30247.385: 97.9921% ( 8) 00:06:55.240 30247.385 - 30449.034: 98.0525% ( 8) 00:06:55.240 30449.034 - 30650.683: 98.0676% ( 2) 00:06:55.240 152446.818 - 153253.415: 98.5281% ( 61) 00:06:55.240 153253.415 - 154060.012: 99.0338% ( 67) 00:06:55.240 212941.588 - 214554.782: 99.2678% ( 31) 00:06:55.240 216167.975 - 217781.169: 99.5169% ( 33) 00:06:55.240 221007.557 - 222620.751: 99.7660% ( 33) 00:06:55.240 224233.945 - 225847.138: 100.0000% ( 31) 00:06:55.240 00:06:55.240 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:55.240 ============================================================================== 00:06:55.240 Range in us Cumulative IO count 
00:06:55.240 5242.880 - 5268.086: 0.0075% ( 1) 00:06:55.240 5343.705 - 5368.911: 0.0226% ( 2) 00:06:55.240 5368.911 - 5394.117: 0.0377% ( 2) 00:06:55.240 5394.117 - 5419.323: 0.0528% ( 2) 00:06:55.240 5419.323 - 5444.529: 0.0679% ( 2) 00:06:55.240 5444.529 - 5469.735: 0.0906% ( 3) 00:06:55.240 5469.735 - 5494.942: 0.1057% ( 2) 00:06:55.240 5494.942 - 5520.148: 0.1283% ( 3) 00:06:55.240 5520.148 - 5545.354: 0.1510% ( 3) 00:06:55.240 5545.354 - 5570.560: 0.3397% ( 25) 00:06:55.240 5570.560 - 5595.766: 0.3623% ( 3) 00:06:55.240 5595.766 - 5620.972: 0.3774% ( 2) 00:06:55.240 5620.972 - 5646.178: 0.3925% ( 2) 00:06:55.240 5646.178 - 5671.385: 0.4001% ( 1) 00:06:55.240 5671.385 - 5696.591: 0.4152% ( 2) 00:06:55.240 5696.591 - 5721.797: 0.4303% ( 2) 00:06:55.240 5721.797 - 5747.003: 0.4378% ( 1) 00:06:55.240 5747.003 - 5772.209: 0.4529% ( 2) 00:06:55.240 5772.209 - 5797.415: 0.4680% ( 2) 00:06:55.240 5797.415 - 5822.622: 0.4906% ( 3) 00:06:55.240 5822.622 - 5847.828: 0.4982% ( 1) 00:06:55.240 5847.828 - 5873.034: 0.5359% ( 5) 00:06:55.240 5873.034 - 5898.240: 0.5963% ( 8) 00:06:55.240 5898.240 - 5923.446: 0.6416% ( 6) 00:06:55.241 5923.446 - 5948.652: 0.7095% ( 9) 00:06:55.241 5948.652 - 5973.858: 0.7473% ( 5) 00:06:55.241 5973.858 - 5999.065: 0.8001% ( 7) 00:06:55.241 5999.065 - 6024.271: 0.8530% ( 7) 00:06:55.241 6024.271 - 6049.477: 0.9888% ( 18) 00:06:55.241 6049.477 - 6074.683: 1.1398% ( 20) 00:06:55.241 6074.683 - 6099.889: 1.3436% ( 27) 00:06:55.241 6099.889 - 6125.095: 1.6833% ( 45) 00:06:55.241 6125.095 - 6150.302: 1.9324% ( 33) 00:06:55.241 6150.302 - 6175.508: 2.1966% ( 35) 00:06:55.241 6175.508 - 6200.714: 2.4758% ( 37) 00:06:55.241 6200.714 - 6225.920: 2.8910% ( 55) 00:06:55.241 6225.920 - 6251.126: 3.2911% ( 53) 00:06:55.241 6251.126 - 6276.332: 3.6685% ( 50) 00:06:55.241 6276.332 - 6301.538: 4.0232% ( 47) 00:06:55.241 6301.538 - 6326.745: 4.8083% ( 104) 00:06:55.241 6326.745 - 6351.951: 5.3367% ( 70) 00:06:55.241 6351.951 - 6377.157: 5.8952% ( 74) 00:06:55.241 6377.157 - 6402.363: 6.4010% ( 67) 00:06:55.241 6402.363 - 6427.569: 6.9218% ( 69) 00:06:55.241 6427.569 - 6452.775: 7.6087% ( 91) 00:06:55.241 6452.775 - 6503.188: 9.5184% ( 253) 00:06:55.241 6503.188 - 6553.600: 12.1075% ( 343) 00:06:55.241 6553.600 - 6604.012: 15.2929% ( 422) 00:06:55.241 6604.012 - 6654.425: 19.2859% ( 529) 00:06:55.241 6654.425 - 6704.837: 23.9659% ( 620) 00:06:55.241 6704.837 - 6755.249: 28.6760% ( 624) 00:06:55.241 6755.249 - 6805.662: 33.9749% ( 702) 00:06:55.241 6805.662 - 6856.074: 40.7080% ( 892) 00:06:55.241 6856.074 - 6906.486: 45.1313% ( 586) 00:06:55.241 6906.486 - 6956.898: 49.5245% ( 582) 00:06:55.241 6956.898 - 7007.311: 53.9478% ( 586) 00:06:55.241 7007.311 - 7057.723: 57.9786% ( 534) 00:06:55.241 7057.723 - 7108.135: 61.5791% ( 477) 00:06:55.241 7108.135 - 7158.548: 64.4777% ( 384) 00:06:55.241 7158.548 - 7208.960: 66.7572% ( 302) 00:06:55.241 7208.960 - 7259.372: 68.8783% ( 281) 00:06:55.241 7259.372 - 7309.785: 70.4031% ( 202) 00:06:55.241 7309.785 - 7360.197: 71.4674% ( 141) 00:06:55.241 7360.197 - 7410.609: 72.3958% ( 123) 00:06:55.241 7410.609 - 7461.022: 73.0978% ( 93) 00:06:55.241 7461.022 - 7511.434: 73.9810% ( 117) 00:06:55.241 7511.434 - 7561.846: 74.8415% ( 114) 00:06:55.241 7561.846 - 7612.258: 75.5057% ( 88) 00:06:55.241 7612.258 - 7662.671: 76.1926% ( 91) 00:06:55.241 7662.671 - 7713.083: 76.5776% ( 51) 00:06:55.241 7713.083 - 7763.495: 76.9097% ( 44) 00:06:55.241 7763.495 - 7813.908: 77.5438% ( 84) 00:06:55.241 7813.908 - 7864.320: 78.0042% ( 61) 00:06:55.241 7864.320 - 
7914.732: 78.6081% ( 80) 00:06:55.241 7914.732 - 7965.145: 79.2271% ( 82) 00:06:55.241 7965.145 - 8015.557: 80.0574% ( 110) 00:06:55.241 8015.557 - 8065.969: 81.0688% ( 134) 00:06:55.241 8065.969 - 8116.382: 82.1105% ( 138) 00:06:55.241 8116.382 - 8166.794: 82.9484% ( 111) 00:06:55.241 8166.794 - 8217.206: 83.9221% ( 129) 00:06:55.241 8217.206 - 8267.618: 84.4807% ( 74) 00:06:55.241 8267.618 - 8318.031: 85.0694% ( 78) 00:06:55.241 8318.031 - 8368.443: 85.6884% ( 82) 00:06:55.241 8368.443 - 8418.855: 86.4206% ( 97) 00:06:55.241 8418.855 - 8469.268: 87.0773% ( 87) 00:06:55.241 8469.268 - 8519.680: 87.4396% ( 48) 00:06:55.241 8519.680 - 8570.092: 87.7566% ( 42) 00:06:55.241 8570.092 - 8620.505: 88.0510% ( 39) 00:06:55.241 8620.505 - 8670.917: 88.3228% ( 36) 00:06:55.241 8670.917 - 8721.329: 88.8587% ( 71) 00:06:55.241 8721.329 - 8771.742: 89.2889% ( 57) 00:06:55.241 8771.742 - 8822.154: 89.6588% ( 49) 00:06:55.241 8822.154 - 8872.566: 90.0136% ( 47) 00:06:55.241 8872.566 - 8922.978: 90.2400% ( 30) 00:06:55.241 8922.978 - 8973.391: 90.5118% ( 36) 00:06:55.241 8973.391 - 9023.803: 90.9496% ( 58) 00:06:55.241 9023.803 - 9074.215: 91.2213% ( 36) 00:06:55.241 9074.215 - 9124.628: 91.4931% ( 36) 00:06:55.241 9124.628 - 9175.040: 91.6893% ( 26) 00:06:55.241 9175.040 - 9225.452: 91.8252% ( 18) 00:06:55.241 9225.452 - 9275.865: 91.9686% ( 19) 00:06:55.241 9275.865 - 9326.277: 92.2630% ( 39) 00:06:55.241 9326.277 - 9376.689: 92.6479% ( 51) 00:06:55.241 9376.689 - 9427.102: 93.0027% ( 47) 00:06:55.241 9427.102 - 9477.514: 93.1386% ( 18) 00:06:55.241 9477.514 - 9527.926: 93.2594% ( 16) 00:06:55.241 9527.926 - 9578.338: 93.3801% ( 16) 00:06:55.241 9578.338 - 9628.751: 93.4707% ( 12) 00:06:55.241 9628.751 - 9679.163: 93.5688% ( 13) 00:06:55.241 9679.163 - 9729.575: 93.6594% ( 12) 00:06:55.241 9729.575 - 9779.988: 93.7274% ( 9) 00:06:55.241 9779.988 - 9830.400: 93.8104% ( 11) 00:06:55.241 9830.400 - 9880.812: 93.9764% ( 22) 00:06:55.241 9880.812 - 9931.225: 94.0293% ( 7) 00:06:55.241 9931.225 - 9981.637: 94.0595% ( 4) 00:06:55.241 9981.637 - 10032.049: 94.1048% ( 6) 00:06:55.241 10032.049 - 10082.462: 94.1727% ( 9) 00:06:55.241 10082.462 - 10132.874: 94.2406% ( 9) 00:06:55.241 10132.874 - 10183.286: 94.2859% ( 6) 00:06:55.241 10183.286 - 10233.698: 94.3010% ( 2) 00:06:55.241 10233.698 - 10284.111: 94.3463% ( 6) 00:06:55.241 10284.111 - 10334.523: 94.4369% ( 12) 00:06:55.241 10334.523 - 10384.935: 94.5728% ( 18) 00:06:55.241 10384.935 - 10435.348: 94.6332% ( 8) 00:06:55.241 10435.348 - 10485.760: 94.7388% ( 14) 00:06:55.241 10485.760 - 10536.172: 94.8370% ( 13) 00:06:55.241 10536.172 - 10586.585: 94.9275% ( 12) 00:06:55.241 10586.585 - 10636.997: 95.0257% ( 13) 00:06:55.241 10636.997 - 10687.409: 95.1313% ( 14) 00:06:55.241 10687.409 - 10737.822: 95.1917% ( 8) 00:06:55.241 10737.822 - 10788.234: 95.2974% ( 14) 00:06:55.241 10788.234 - 10838.646: 95.3653% ( 9) 00:06:55.241 10838.646 - 10889.058: 95.4408% ( 10) 00:06:55.241 10889.058 - 10939.471: 95.5088% ( 9) 00:06:55.241 10939.471 - 10989.883: 95.5691% ( 8) 00:06:55.241 10989.883 - 11040.295: 95.6446% ( 10) 00:06:55.241 11040.295 - 11090.708: 95.7201% ( 10) 00:06:55.241 11090.708 - 11141.120: 95.8031% ( 11) 00:06:55.241 11141.120 - 11191.532: 95.8786% ( 10) 00:06:55.241 11191.532 - 11241.945: 95.9390% ( 8) 00:06:55.241 11241.945 - 11292.357: 95.9768% ( 5) 00:06:55.241 11292.357 - 11342.769: 95.9994% ( 3) 00:06:55.241 11342.769 - 11393.182: 96.0296% ( 4) 00:06:55.241 11393.182 - 11443.594: 96.0522% ( 3) 00:06:55.241 11443.594 - 11494.006: 96.0749% ( 3) 
00:06:55.241 11494.006 - 11544.418: 96.1051% ( 4) 00:06:55.241 11544.418 - 11594.831: 96.1277% ( 3) 00:06:55.241 11594.831 - 11645.243: 96.1353% ( 1) 00:06:55.241 14518.745 - 14619.569: 96.1428% ( 1) 00:06:55.241 14619.569 - 14720.394: 96.1881% ( 6) 00:06:55.241 14720.394 - 14821.218: 96.2409% ( 7) 00:06:55.241 14821.218 - 14922.043: 96.3013% ( 8) 00:06:55.241 14922.043 - 15022.868: 96.3693% ( 9) 00:06:55.241 15022.868 - 15123.692: 96.4146% ( 6) 00:06:55.241 15123.692 - 15224.517: 96.4523% ( 5) 00:06:55.241 15224.517 - 15325.342: 96.4976% ( 6) 00:06:55.241 15325.342 - 15426.166: 96.5429% ( 6) 00:06:55.241 15426.166 - 15526.991: 96.5882% ( 6) 00:06:55.241 15526.991 - 15627.815: 96.6184% ( 4) 00:06:55.241 16232.763 - 16333.588: 96.6712% ( 7) 00:06:55.241 16333.588 - 16434.412: 96.7391% ( 9) 00:06:55.241 16434.412 - 16535.237: 96.8222% ( 11) 00:06:55.241 16535.237 - 16636.062: 96.8599% ( 5) 00:06:55.241 16636.062 - 16736.886: 96.8976% ( 5) 00:06:55.241 16736.886 - 16837.711: 96.9354% ( 5) 00:06:55.241 16837.711 - 16938.535: 96.9882% ( 7) 00:06:55.241 16938.535 - 17039.360: 97.0335% ( 6) 00:06:55.241 17039.360 - 17140.185: 97.0788% ( 6) 00:06:55.241 17140.185 - 17241.009: 97.1014% ( 3) 00:06:55.241 22080.591 - 22181.415: 97.1241% ( 3) 00:06:55.241 22181.415 - 22282.240: 97.1618% ( 5) 00:06:55.241 22282.240 - 22383.065: 97.1920% ( 4) 00:06:55.241 22383.065 - 22483.889: 97.2222% ( 4) 00:06:55.241 22483.889 - 22584.714: 97.2524% ( 4) 00:06:55.241 22584.714 - 22685.538: 97.2826% ( 4) 00:06:55.241 22685.538 - 22786.363: 97.3128% ( 4) 00:06:55.241 22786.363 - 22887.188: 97.3430% ( 4) 00:06:55.241 22887.188 - 22988.012: 97.3732% ( 4) 00:06:55.241 22988.012 - 23088.837: 97.4034% ( 4) 00:06:55.241 23088.837 - 23189.662: 97.4336% ( 4) 00:06:55.241 23189.662 - 23290.486: 97.4638% ( 4) 00:06:55.242 23290.486 - 23391.311: 97.5015% ( 5) 00:06:55.242 23391.311 - 23492.135: 97.5317% ( 4) 00:06:55.242 23492.135 - 23592.960: 97.5619% ( 4) 00:06:55.242 23592.960 - 23693.785: 97.5845% ( 3) 00:06:55.242 27020.997 - 27222.646: 97.6072% ( 3) 00:06:55.242 27222.646 - 27424.295: 97.6676% ( 8) 00:06:55.242 27424.295 - 27625.945: 97.7280% ( 8) 00:06:55.242 27625.945 - 27827.594: 97.7883% ( 8) 00:06:55.242 27827.594 - 28029.243: 97.8336% ( 6) 00:06:55.242 28029.243 - 28230.892: 97.8940% ( 8) 00:06:55.242 28230.892 - 28432.542: 97.9469% ( 7) 00:06:55.242 28432.542 - 28634.191: 98.0072% ( 8) 00:06:55.242 28634.191 - 28835.840: 98.0676% ( 8) 00:06:55.242 152446.818 - 153253.415: 98.5507% ( 64) 00:06:55.242 153253.415 - 154060.012: 99.0338% ( 64) 00:06:55.242 212941.588 - 214554.782: 99.1999% ( 22) 00:06:55.242 216167.975 - 217781.169: 99.5169% ( 42) 00:06:55.242 221007.557 - 222620.751: 99.7358% ( 29) 00:06:55.242 222620.751 - 224233.945: 99.8339% ( 13) 00:06:55.242 224233.945 - 225847.138: 100.0000% ( 22) 00:06:55.242 00:06:55.242 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:55.242 ============================================================================== 00:06:55.242 Range in us Cumulative IO count 00:06:55.242 5318.498 - 5343.705: 0.0151% ( 2) 00:06:55.242 5343.705 - 5368.911: 0.0303% ( 2) 00:06:55.242 5368.911 - 5394.117: 0.0454% ( 2) 00:06:55.242 5394.117 - 5419.323: 0.0606% ( 2) 00:06:55.242 5419.323 - 5444.529: 0.0757% ( 2) 00:06:55.242 5444.529 - 5469.735: 0.0833% ( 1) 00:06:55.242 5469.735 - 5494.942: 0.0984% ( 2) 00:06:55.242 5494.942 - 5520.148: 0.1136% ( 2) 00:06:55.242 5520.148 - 5545.354: 0.1211% ( 1) 00:06:55.242 5545.354 - 5570.560: 0.1363% ( 2) 00:06:55.242 5570.560 - 5595.766: 
0.1514% ( 2) 00:06:55.242 5595.766 - 5620.972: 0.1666% ( 2) 00:06:55.242 5620.972 - 5646.178: 0.1742% ( 1) 00:06:55.242 5721.797 - 5747.003: 0.1817% ( 1) 00:06:55.242 5822.622 - 5847.828: 0.1893% ( 1) 00:06:55.242 5873.034 - 5898.240: 0.2044% ( 2) 00:06:55.242 5898.240 - 5923.446: 0.2196% ( 2) 00:06:55.242 5923.446 - 5948.652: 0.2499% ( 4) 00:06:55.242 5948.652 - 5973.858: 0.2802% ( 4) 00:06:55.242 5973.858 - 5999.065: 0.3256% ( 6) 00:06:55.242 5999.065 - 6024.271: 0.4013% ( 10) 00:06:55.242 6024.271 - 6049.477: 0.4694% ( 9) 00:06:55.242 6049.477 - 6074.683: 0.5452% ( 10) 00:06:55.242 6074.683 - 6099.889: 0.7496% ( 27) 00:06:55.242 6099.889 - 6125.095: 0.9995% ( 33) 00:06:55.242 6125.095 - 6150.302: 1.2039% ( 27) 00:06:55.242 6150.302 - 6175.508: 1.4765% ( 36) 00:06:55.242 6175.508 - 6200.714: 1.9005% ( 56) 00:06:55.242 6200.714 - 6225.920: 2.3245% ( 56) 00:06:55.242 6225.920 - 6251.126: 2.6501% ( 43) 00:06:55.242 6251.126 - 6276.332: 2.9530% ( 40) 00:06:55.242 6276.332 - 6301.538: 3.5511% ( 79) 00:06:55.242 6301.538 - 6326.745: 4.1872% ( 84) 00:06:55.242 6326.745 - 6351.951: 4.8611% ( 89) 00:06:55.242 6351.951 - 6377.157: 5.6940% ( 110) 00:06:55.242 6377.157 - 6402.363: 6.2618% ( 75) 00:06:55.242 6402.363 - 6427.569: 7.0341% ( 102) 00:06:55.242 6427.569 - 6452.775: 7.8065% ( 102) 00:06:55.242 6452.775 - 6503.188: 9.5707% ( 233) 00:06:55.242 6503.188 - 6553.600: 12.0466% ( 327) 00:06:55.242 6553.600 - 6604.012: 15.3631% ( 438) 00:06:55.242 6604.012 - 6654.425: 19.1565% ( 501) 00:06:55.242 6654.425 - 6704.837: 23.6693% ( 596) 00:06:55.242 6704.837 - 6755.249: 28.8938% ( 690) 00:06:55.242 6755.249 - 6805.662: 34.0728% ( 684) 00:06:55.242 6805.662 - 6856.074: 39.7214% ( 746) 00:06:55.242 6856.074 - 6906.486: 44.6884% ( 656) 00:06:55.242 6906.486 - 6956.898: 50.1628% ( 723) 00:06:55.242 6956.898 - 7007.311: 54.4030% ( 560) 00:06:55.242 7007.311 - 7057.723: 58.8324% ( 585) 00:06:55.242 7057.723 - 7108.135: 62.1261% ( 435) 00:06:55.242 7108.135 - 7158.548: 64.8595% ( 361) 00:06:55.242 7158.548 - 7208.960: 66.8509% ( 263) 00:06:55.242 7208.960 - 7259.372: 68.3804% ( 202) 00:06:55.242 7259.372 - 7309.785: 70.1522% ( 234) 00:06:55.242 7309.785 - 7360.197: 71.0835% ( 123) 00:06:55.242 7360.197 - 7410.609: 72.0981% ( 134) 00:06:55.242 7410.609 - 7461.022: 73.1127% ( 134) 00:06:55.242 7461.022 - 7511.434: 73.6882% ( 76) 00:06:55.242 7511.434 - 7561.846: 74.1576% ( 62) 00:06:55.242 7561.846 - 7612.258: 74.8164% ( 87) 00:06:55.242 7612.258 - 7662.671: 75.3843% ( 75) 00:06:55.242 7662.671 - 7713.083: 76.1490% ( 101) 00:06:55.242 7713.083 - 7763.495: 77.2166% ( 141) 00:06:55.242 7763.495 - 7813.908: 78.0722% ( 113) 00:06:55.242 7813.908 - 7864.320: 78.7158% ( 85) 00:06:55.242 7864.320 - 7914.732: 79.3216% ( 80) 00:06:55.242 7914.732 - 7965.145: 80.0182% ( 92) 00:06:55.242 7965.145 - 8015.557: 80.7981% ( 103) 00:06:55.242 8015.557 - 8065.969: 81.4568% ( 87) 00:06:55.242 8065.969 - 8116.382: 82.3806% ( 122) 00:06:55.242 8116.382 - 8166.794: 83.1680% ( 104) 00:06:55.242 8166.794 - 8217.206: 83.7056% ( 71) 00:06:55.242 8217.206 - 8267.618: 84.5915% ( 117) 00:06:55.242 8267.618 - 8318.031: 85.3714% ( 103) 00:06:55.242 8318.031 - 8368.443: 85.9317% ( 74) 00:06:55.242 8368.443 - 8418.855: 86.3557% ( 56) 00:06:55.242 8418.855 - 8469.268: 86.9085% ( 73) 00:06:55.242 8469.268 - 8519.680: 87.2568% ( 46) 00:06:55.242 8519.680 - 8570.092: 87.5975% ( 45) 00:06:55.242 8570.092 - 8620.505: 87.8928% ( 39) 00:06:55.242 8620.505 - 8670.917: 88.0896% ( 26) 00:06:55.242 8670.917 - 8721.329: 88.3774% ( 38) 00:06:55.242 
8721.329 - 8771.742: 88.6727% ( 39) 00:06:55.242 8771.742 - 8822.154: 88.9907% ( 42) 00:06:55.242 8822.154 - 8872.566: 89.5737% ( 77) 00:06:55.242 8872.566 - 8922.978: 89.9447% ( 49) 00:06:55.242 8922.978 - 8973.391: 90.4899% ( 72) 00:06:55.242 8973.391 - 9023.803: 90.8155% ( 43) 00:06:55.242 9023.803 - 9074.215: 91.1638% ( 46) 00:06:55.242 9074.215 - 9124.628: 91.4894% ( 43) 00:06:55.242 9124.628 - 9175.040: 91.7998% ( 41) 00:06:55.242 9175.040 - 9225.452: 92.1935% ( 52) 00:06:55.242 9225.452 - 9275.865: 92.6100% ( 55) 00:06:55.242 9275.865 - 9326.277: 92.8371% ( 30) 00:06:55.242 9326.277 - 9376.689: 93.0340% ( 26) 00:06:55.242 9376.689 - 9427.102: 93.3974% ( 48) 00:06:55.242 9427.102 - 9477.514: 93.5337% ( 18) 00:06:55.242 9477.514 - 9527.926: 93.6625% ( 17) 00:06:55.242 9527.926 - 9578.338: 93.7836% ( 16) 00:06:55.242 9578.338 - 9628.751: 93.8745% ( 12) 00:06:55.242 9628.751 - 9679.163: 93.9502% ( 10) 00:06:55.242 9679.163 - 9729.575: 94.0108% ( 8) 00:06:55.242 9729.575 - 9779.988: 94.0713% ( 8) 00:06:55.242 9779.988 - 9830.400: 94.1168% ( 6) 00:06:55.242 9830.400 - 9880.812: 94.1319% ( 2) 00:06:55.242 9880.812 - 9931.225: 94.1546% ( 3) 00:06:55.242 9931.225 - 9981.637: 94.1773% ( 3) 00:06:55.242 9981.637 - 10032.049: 94.1925% ( 2) 00:06:55.242 10032.049 - 10082.462: 94.2152% ( 3) 00:06:55.242 10082.462 - 10132.874: 94.2682% ( 7) 00:06:55.242 10132.874 - 10183.286: 94.3515% ( 11) 00:06:55.242 10183.286 - 10233.698: 94.4348% ( 11) 00:06:55.242 10233.698 - 10284.111: 94.5559% ( 16) 00:06:55.242 10284.111 - 10334.523: 94.7906% ( 31) 00:06:55.242 10334.523 - 10384.935: 94.8664% ( 10) 00:06:55.242 10384.935 - 10435.348: 94.9345% ( 9) 00:06:55.242 10435.348 - 10485.760: 95.0027% ( 9) 00:06:55.242 10485.760 - 10536.172: 95.0784% ( 10) 00:06:55.242 10536.172 - 10586.585: 95.1389% ( 8) 00:06:55.242 10586.585 - 10636.997: 95.1919% ( 7) 00:06:55.242 10636.997 - 10687.409: 95.2525% ( 8) 00:06:55.242 10687.409 - 10737.822: 95.2979% ( 6) 00:06:55.242 10737.822 - 10788.234: 95.3510% ( 7) 00:06:55.242 10788.234 - 10838.646: 95.4115% ( 8) 00:06:55.242 10838.646 - 10889.058: 95.4948% ( 11) 00:06:55.242 10889.058 - 10939.471: 95.5781% ( 11) 00:06:55.242 10939.471 - 10989.883: 95.6311% ( 7) 00:06:55.242 10989.883 - 11040.295: 95.6841% ( 7) 00:06:55.242 11040.295 - 11090.708: 95.7371% ( 7) 00:06:55.242 11090.708 - 11141.120: 95.7901% ( 7) 00:06:55.242 11141.120 - 11191.532: 95.8431% ( 7) 00:06:55.242 11191.532 - 11241.945: 95.8658% ( 3) 00:06:55.242 11241.945 - 11292.357: 95.8961% ( 4) 00:06:55.242 11292.357 - 11342.769: 95.9264% ( 4) 00:06:55.242 11342.769 - 11393.182: 95.9491% ( 3) 00:06:55.242 11393.182 - 11443.594: 95.9794% ( 4) 00:06:55.242 11443.594 - 11494.006: 96.0021% ( 3) 00:06:55.242 11494.006 - 11544.418: 96.0248% ( 3) 00:06:55.242 11544.418 - 11594.831: 96.0551% ( 4) 00:06:55.242 11594.831 - 11645.243: 96.0854% ( 4) 00:06:55.242 11645.243 - 11695.655: 96.1081% ( 3) 00:06:55.242 11695.655 - 11746.068: 96.1233% ( 2) 00:06:55.242 14216.271 - 14317.095: 96.1308% ( 1) 00:06:55.242 14317.095 - 14417.920: 96.2066% ( 10) 00:06:55.242 14417.920 - 14518.745: 96.2368% ( 4) 00:06:55.242 14518.745 - 14619.569: 96.2823% ( 6) 00:06:55.242 14619.569 - 14720.394: 96.3201% ( 5) 00:06:55.242 14720.394 - 14821.218: 96.3656% ( 6) 00:06:55.242 14821.218 - 14922.043: 96.4110% ( 6) 00:06:55.242 14922.043 - 15022.868: 96.4489% ( 5) 00:06:55.242 15022.868 - 15123.692: 96.4943% ( 6) 00:06:55.242 15123.692 - 15224.517: 96.5246% ( 4) 00:06:55.242 15224.517 - 15325.342: 96.5700% ( 6) 00:06:55.242 15325.342 - 15426.166: 
96.6079% ( 5) 00:06:55.242 16434.412 - 16535.237: 96.6154% ( 1) 00:06:55.242 16535.237 - 16636.062: 96.6609% ( 6) 00:06:55.242 16636.062 - 16736.886: 96.7139% ( 7) 00:06:55.242 16736.886 - 16837.711: 96.7669% ( 7) 00:06:55.242 16837.711 - 16938.535: 96.8047% ( 5) 00:06:55.242 16938.535 - 17039.360: 96.8502% ( 6) 00:06:55.242 17039.360 - 17140.185: 96.8880% ( 5) 00:06:55.243 17140.185 - 17241.009: 96.9183% ( 4) 00:06:55.243 17241.009 - 17341.834: 96.9562% ( 5) 00:06:55.243 17341.834 - 17442.658: 96.9940% ( 5) 00:06:55.243 17442.658 - 17543.483: 97.0470% ( 7) 00:06:55.243 17543.483 - 17644.308: 97.0925% ( 6) 00:06:55.243 20265.748 - 20366.572: 97.1076% ( 2) 00:06:55.243 20366.572 - 20467.397: 97.1379% ( 4) 00:06:55.243 20467.397 - 20568.222: 97.1682% ( 4) 00:06:55.243 20568.222 - 20669.046: 97.1985% ( 4) 00:06:55.243 20669.046 - 20769.871: 97.2287% ( 4) 00:06:55.243 20769.871 - 20870.695: 97.2666% ( 5) 00:06:55.243 20870.695 - 20971.520: 97.2969% ( 4) 00:06:55.243 20971.520 - 21072.345: 97.3272% ( 4) 00:06:55.243 21072.345 - 21173.169: 97.3575% ( 4) 00:06:55.243 21173.169 - 21273.994: 97.3877% ( 4) 00:06:55.243 21273.994 - 21374.818: 97.4180% ( 4) 00:06:55.243 21374.818 - 21475.643: 97.4483% ( 4) 00:06:55.243 21475.643 - 21576.468: 97.4862% ( 5) 00:06:55.243 21576.468 - 21677.292: 97.5165% ( 4) 00:06:55.243 21677.292 - 21778.117: 97.5392% ( 3) 00:06:55.243 21778.117 - 21878.942: 97.5770% ( 5) 00:06:55.243 25306.978 - 25407.803: 97.5922% ( 2) 00:06:55.243 25407.803 - 25508.628: 97.6225% ( 4) 00:06:55.243 25508.628 - 25609.452: 97.6528% ( 4) 00:06:55.243 25609.452 - 25710.277: 97.6830% ( 4) 00:06:55.243 25710.277 - 25811.102: 97.7133% ( 4) 00:06:55.243 25811.102 - 26012.751: 97.7739% ( 8) 00:06:55.243 26012.751 - 26214.400: 97.8345% ( 8) 00:06:55.243 26214.400 - 26416.049: 97.8875% ( 7) 00:06:55.243 26416.049 - 26617.698: 97.9481% ( 8) 00:06:55.243 26617.698 - 26819.348: 98.0162% ( 9) 00:06:55.243 26819.348 - 27020.997: 98.0616% ( 6) 00:06:55.243 149220.431 - 150027.028: 98.1525% ( 12) 00:06:55.243 151640.222 - 152446.818: 98.1676% ( 2) 00:06:55.243 152446.818 - 153253.415: 98.5462% ( 50) 00:06:55.243 153253.415 - 154060.012: 98.9400% ( 52) 00:06:55.243 155673.206 - 156479.803: 98.9551% ( 2) 00:06:55.243 156479.803 - 157286.400: 99.0308% ( 10) 00:06:55.243 214554.782 - 216167.975: 99.2050% ( 23) 00:06:55.243 216167.975 - 217781.169: 99.5154% ( 41) 00:06:55.243 224233.945 - 225847.138: 99.6517% ( 18) 00:06:55.243 225847.138 - 227460.332: 99.6896% ( 5) 00:06:55.243 227460.332 - 229073.526: 99.9167% ( 30) 00:06:55.243 229073.526 - 230686.720: 100.0000% ( 11) 00:06:55.243 00:06:55.243 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:55.243 ============================================================================== 00:06:55.243 Range in us Cumulative IO count 00:06:55.243 5847.828 - 5873.034: 0.0151% ( 2) 00:06:55.243 5873.034 - 5898.240: 0.0453% ( 4) 00:06:55.243 5898.240 - 5923.446: 0.0755% ( 4) 00:06:55.243 5923.446 - 5948.652: 0.1283% ( 7) 00:06:55.243 5948.652 - 5973.858: 0.1585% ( 4) 00:06:55.243 5973.858 - 5999.065: 0.2189% ( 8) 00:06:55.243 5999.065 - 6024.271: 0.2868% ( 9) 00:06:55.243 6024.271 - 6049.477: 0.3774% ( 12) 00:06:55.243 6049.477 - 6074.683: 0.4303% ( 7) 00:06:55.243 6074.683 - 6099.889: 0.6643% ( 31) 00:06:55.243 6099.889 - 6125.095: 0.8152% ( 20) 00:06:55.243 6125.095 - 6150.302: 1.0492% ( 31) 00:06:55.243 6150.302 - 6175.508: 1.3813% ( 44) 00:06:55.243 6175.508 - 6200.714: 1.5399% ( 21) 00:06:55.243 6200.714 - 6225.920: 1.7739% ( 31) 00:06:55.243 
6225.920 - 6251.126: 2.2117% ( 58) 00:06:55.243 6251.126 - 6276.332: 2.5966% ( 51) 00:06:55.243 6276.332 - 6301.538: 3.0193% ( 56) 00:06:55.243 6301.538 - 6326.745: 3.5326% ( 68) 00:06:55.243 6326.745 - 6351.951: 3.9704% ( 58) 00:06:55.243 6351.951 - 6377.157: 4.4988% ( 70) 00:06:55.243 6377.157 - 6402.363: 5.1027% ( 80) 00:06:55.243 6402.363 - 6427.569: 6.0236% ( 122) 00:06:55.243 6427.569 - 6452.775: 6.9897% ( 128) 00:06:55.243 6452.775 - 6503.188: 8.9447% ( 259) 00:06:55.243 6503.188 - 6553.600: 11.4357% ( 330) 00:06:55.243 6553.600 - 6604.012: 14.4173% ( 395) 00:06:55.243 6604.012 - 6654.425: 18.6066% ( 555) 00:06:55.243 6654.425 - 6704.837: 23.6941% ( 674) 00:06:55.243 6704.837 - 6755.249: 28.8043% ( 677) 00:06:55.243 6755.249 - 6805.662: 35.1751% ( 844) 00:06:55.243 6805.662 - 6856.074: 40.2929% ( 678) 00:06:55.243 6856.074 - 6906.486: 44.9124% ( 612) 00:06:55.243 6906.486 - 6956.898: 49.4641% ( 603) 00:06:55.243 6956.898 - 7007.311: 54.0836% ( 612) 00:06:55.243 7007.311 - 7057.723: 58.3409% ( 564) 00:06:55.243 7057.723 - 7108.135: 61.2319% ( 383) 00:06:55.243 7108.135 - 7158.548: 64.0851% ( 378) 00:06:55.243 7158.548 - 7208.960: 66.3874% ( 305) 00:06:55.243 7208.960 - 7259.372: 68.8255% ( 323) 00:06:55.243 7259.372 - 7309.785: 70.0936% ( 168) 00:06:55.243 7309.785 - 7360.197: 71.4146% ( 175) 00:06:55.243 7360.197 - 7410.609: 72.3354% ( 122) 00:06:55.243 7410.609 - 7461.022: 72.9620% ( 83) 00:06:55.243 7461.022 - 7511.434: 73.5507% ( 78) 00:06:55.243 7511.434 - 7561.846: 74.0565% ( 67) 00:06:55.243 7561.846 - 7612.258: 74.5622% ( 67) 00:06:55.243 7612.258 - 7662.671: 75.0830% ( 69) 00:06:55.243 7662.671 - 7713.083: 76.0492% ( 128) 00:06:55.243 7713.083 - 7763.495: 76.9852% ( 124) 00:06:55.243 7763.495 - 7813.908: 77.5740% ( 78) 00:06:55.243 7813.908 - 7864.320: 78.3439% ( 102) 00:06:55.243 7864.320 - 7914.732: 79.3403% ( 132) 00:06:55.243 7914.732 - 7965.145: 80.2914% ( 126) 00:06:55.243 7965.145 - 8015.557: 81.3028% ( 134) 00:06:55.243 8015.557 - 8065.969: 82.1784% ( 116) 00:06:55.243 8065.969 - 8116.382: 82.8351% ( 87) 00:06:55.243 8116.382 - 8166.794: 83.4994% ( 88) 00:06:55.243 8166.794 - 8217.206: 84.0655% ( 75) 00:06:55.243 8217.206 - 8267.618: 84.6014% ( 71) 00:06:55.243 8267.618 - 8318.031: 85.0317% ( 57) 00:06:55.243 8318.031 - 8368.443: 85.6658% ( 84) 00:06:55.243 8368.443 - 8418.855: 86.1866% ( 69) 00:06:55.243 8418.855 - 8469.268: 86.7980% ( 81) 00:06:55.243 8469.268 - 8519.680: 87.0924% ( 39) 00:06:55.243 8519.680 - 8570.092: 87.6132% ( 69) 00:06:55.243 8570.092 - 8620.505: 88.1341% ( 69) 00:06:55.243 8620.505 - 8670.917: 88.5417% ( 54) 00:06:55.243 8670.917 - 8721.329: 88.9040% ( 48) 00:06:55.243 8721.329 - 8771.742: 89.1682% ( 35) 00:06:55.243 8771.742 - 8822.154: 89.4097% ( 32) 00:06:55.243 8822.154 - 8872.566: 89.7645% ( 47) 00:06:55.243 8872.566 - 8922.978: 90.2476% ( 64) 00:06:55.243 8922.978 - 8973.391: 90.8665% ( 82) 00:06:55.243 8973.391 - 9023.803: 91.2289% ( 48) 00:06:55.243 9023.803 - 9074.215: 91.4931% ( 35) 00:06:55.243 9074.215 - 9124.628: 91.8327% ( 45) 00:06:55.243 9124.628 - 9175.040: 92.1271% ( 39) 00:06:55.243 9175.040 - 9225.452: 92.4290% ( 40) 00:06:55.243 9225.452 - 9275.865: 92.7159% ( 38) 00:06:55.243 9275.865 - 9326.277: 93.0027% ( 38) 00:06:55.243 9326.277 - 9376.689: 93.3650% ( 48) 00:06:55.243 9376.689 - 9427.102: 93.4707% ( 14) 00:06:55.243 9427.102 - 9477.514: 93.5839% ( 15) 00:06:55.243 9477.514 - 9527.926: 93.6745% ( 12) 00:06:55.243 9527.926 - 9578.338: 93.7500% ( 10) 00:06:55.243 9578.338 - 9628.751: 93.8783% ( 17) 00:06:55.243 9628.751 
- 9679.163: 93.9991% ( 16) 00:06:55.243 9679.163 - 9729.575: 94.1199% ( 16) 00:06:55.243 9729.575 - 9779.988: 94.1727% ( 7) 00:06:55.243 9779.988 - 9830.400: 94.1954% ( 3) 00:06:55.243 9830.400 - 9880.812: 94.2029% ( 1) 00:06:55.243 9880.812 - 9931.225: 94.2180% ( 2) 00:06:55.243 9931.225 - 9981.637: 94.2633% ( 6) 00:06:55.243 9981.637 - 10032.049: 94.3161% ( 7) 00:06:55.243 10032.049 - 10082.462: 94.3614% ( 6) 00:06:55.243 10082.462 - 10132.874: 94.4218% ( 8) 00:06:55.243 10132.874 - 10183.286: 94.4369% ( 2) 00:06:55.243 10183.286 - 10233.698: 94.4595% ( 3) 00:06:55.243 10233.698 - 10284.111: 94.4746% ( 2) 00:06:55.243 10284.111 - 10334.523: 94.5199% ( 6) 00:06:55.243 10334.523 - 10384.935: 94.5803% ( 8) 00:06:55.243 10384.935 - 10435.348: 94.6256% ( 6) 00:06:55.243 10435.348 - 10485.760: 94.6860% ( 8) 00:06:55.243 10485.760 - 10536.172: 94.7690% ( 11) 00:06:55.243 10536.172 - 10586.585: 94.9804% ( 28) 00:06:55.243 10586.585 - 10636.997: 95.0634% ( 11) 00:06:55.243 10636.997 - 10687.409: 95.1540% ( 12) 00:06:55.243 10687.409 - 10737.822: 95.2446% ( 12) 00:06:55.243 10737.822 - 10788.234: 95.5314% ( 38) 00:06:55.243 10788.234 - 10838.646: 95.5767% ( 6) 00:06:55.243 10838.646 - 10889.058: 95.6295% ( 7) 00:06:55.243 10889.058 - 10939.471: 95.6899% ( 8) 00:06:55.243 10939.471 - 10989.883: 95.7352% ( 6) 00:06:55.243 10989.883 - 11040.295: 95.7729% ( 5) 00:06:55.243 11040.295 - 11090.708: 95.8031% ( 4) 00:06:55.243 11090.708 - 11141.120: 95.8333% ( 4) 00:06:55.243 11141.120 - 11191.532: 95.8560% ( 3) 00:06:55.243 11191.532 - 11241.945: 95.8862% ( 4) 00:06:55.243 11241.945 - 11292.357: 95.9088% ( 3) 00:06:55.243 11292.357 - 11342.769: 95.9315% ( 3) 00:06:55.243 11342.769 - 11393.182: 95.9541% ( 3) 00:06:55.243 11393.182 - 11443.594: 95.9843% ( 4) 00:06:55.243 11443.594 - 11494.006: 96.0145% ( 4) 00:06:55.243 11494.006 - 11544.418: 96.0371% ( 3) 00:06:55.243 11544.418 - 11594.831: 96.0673% ( 4) 00:06:55.243 11594.831 - 11645.243: 96.0975% ( 4) 00:06:55.243 11645.243 - 11695.655: 96.1277% ( 4) 00:06:55.243 11695.655 - 11746.068: 96.1353% ( 1) 00:06:55.243 13812.972 - 13913.797: 96.2258% ( 12) 00:06:55.243 13913.797 - 14014.622: 96.2862% ( 8) 00:06:55.243 14014.622 - 14115.446: 96.3240% ( 5) 00:06:55.243 14115.446 - 14216.271: 96.3617% ( 5) 00:06:55.243 14216.271 - 14317.095: 96.3995% ( 5) 00:06:55.243 14317.095 - 14417.920: 96.4447% ( 6) 00:06:55.243 14417.920 - 14518.745: 96.4825% ( 5) 00:06:55.243 14518.745 - 14619.569: 96.5278% ( 6) 00:06:55.243 14619.569 - 14720.394: 96.5655% ( 5) 00:06:55.243 14720.394 - 14821.218: 96.6184% ( 7) 00:06:55.244 14821.218 - 14922.043: 96.6561% ( 5) 00:06:55.244 14922.043 - 15022.868: 96.6863% ( 4) 00:06:55.244 15022.868 - 15123.692: 96.7165% ( 4) 00:06:55.244 15123.692 - 15224.517: 96.7467% ( 4) 00:06:55.244 15224.517 - 15325.342: 96.7769% ( 4) 00:06:55.244 15325.342 - 15426.166: 96.8071% ( 4) 00:06:55.244 15426.166 - 15526.991: 96.8373% ( 4) 00:06:55.244 15526.991 - 15627.815: 96.8675% ( 4) 00:06:55.244 15627.815 - 15728.640: 96.9052% ( 5) 00:06:55.244 15728.640 - 15829.465: 96.9354% ( 4) 00:06:55.244 15829.465 - 15930.289: 96.9656% ( 4) 00:06:55.244 15930.289 - 16031.114: 96.9958% ( 4) 00:06:55.244 16031.114 - 16131.938: 97.0260% ( 4) 00:06:55.244 16131.938 - 16232.763: 97.0637% ( 5) 00:06:55.244 16232.763 - 16333.588: 97.1543% ( 12) 00:06:55.244 16333.588 - 16434.412: 97.2222% ( 9) 00:06:55.244 16434.412 - 16535.237: 97.2751% ( 7) 00:06:55.244 16535.237 - 16636.062: 97.3204% ( 6) 00:06:55.244 16636.062 - 16736.886: 97.3656% ( 6) 00:06:55.244 16736.886 - 
16837.711: 97.4185% ( 7) 00:06:55.244 16837.711 - 16938.535: 97.4789% ( 8) 00:06:55.244 16938.535 - 17039.360: 97.5317% ( 7) 00:06:55.244 17039.360 - 17140.185: 97.5770% ( 6) 00:06:55.244 17140.185 - 17241.009: 97.5845% ( 1) 00:06:55.244 19660.800 - 19761.625: 97.5996% ( 2) 00:06:55.244 19761.625 - 19862.449: 97.6298% ( 4) 00:06:55.244 19862.449 - 19963.274: 97.6600% ( 4) 00:06:55.244 19963.274 - 20064.098: 97.6902% ( 4) 00:06:55.244 20064.098 - 20164.923: 97.7204% ( 4) 00:06:55.244 20164.923 - 20265.748: 97.7582% ( 5) 00:06:55.244 20265.748 - 20366.572: 97.7883% ( 4) 00:06:55.244 20366.572 - 20467.397: 97.8185% ( 4) 00:06:55.244 20467.397 - 20568.222: 97.8487% ( 4) 00:06:55.244 20568.222 - 20669.046: 97.8789% ( 4) 00:06:55.244 20669.046 - 20769.871: 97.9167% ( 5) 00:06:55.244 20769.871 - 20870.695: 97.9469% ( 4) 00:06:55.244 20870.695 - 20971.520: 97.9771% ( 4) 00:06:55.244 20971.520 - 21072.345: 98.0072% ( 4) 00:06:55.244 21072.345 - 21173.169: 98.0374% ( 4) 00:06:55.244 21173.169 - 21273.994: 98.0676% ( 4) 00:06:55.244 152446.818 - 153253.415: 98.5507% ( 64) 00:06:55.244 153253.415 - 154060.012: 98.9659% ( 55) 00:06:55.244 154060.012 - 154866.609: 99.0338% ( 9) 00:06:55.244 217781.169 - 219394.363: 99.2829% ( 33) 00:06:55.244 219394.363 - 221007.557: 99.4490% ( 22) 00:06:55.244 221007.557 - 222620.751: 99.5169% ( 9) 00:06:55.244 225847.138 - 227460.332: 99.7509% ( 31) 00:06:55.244 229073.526 - 230686.720: 100.0000% ( 33) 00:06:55.244 00:06:55.244 06:05:14 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:06:55.244 00:06:55.244 real 0m2.654s 00:06:55.244 user 0m2.344s 00:06:55.244 sys 0m0.204s 00:06:55.244 06:05:14 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.244 06:05:14 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.244 ************************************ 00:06:55.244 END TEST nvme_perf 00:06:55.244 ************************************ 00:06:55.244 06:05:14 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:06:55.244 06:05:14 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:55.244 06:05:14 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.244 06:05:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.244 ************************************ 00:06:55.244 START TEST nvme_hello_world 00:06:55.244 ************************************ 00:06:55.244 06:05:14 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:06:55.505 Initializing NVMe Controllers 00:06:55.505 Attached to 0000:00:10.0 00:06:55.505 Namespace ID: 1 size: 6GB 00:06:55.505 Attached to 0000:00:11.0 00:06:55.505 Namespace ID: 1 size: 5GB 00:06:55.505 Attached to 0000:00:13.0 00:06:55.505 Namespace ID: 1 size: 1GB 00:06:55.506 Attached to 0000:00:12.0 00:06:55.506 Namespace ID: 1 size: 4GB 00:06:55.506 Namespace ID: 2 size: 4GB 00:06:55.506 Namespace ID: 3 size: 4GB 00:06:55.506 Initialization complete. 00:06:55.506 INFO: using host memory buffer for IO 00:06:55.506 Hello world! 00:06:55.506 INFO: using host memory buffer for IO 00:06:55.506 Hello world! 00:06:55.506 INFO: using host memory buffer for IO 00:06:55.506 Hello world! 00:06:55.506 INFO: using host memory buffer for IO 00:06:55.506 Hello world! 00:06:55.506 INFO: using host memory buffer for IO 00:06:55.506 Hello world! 00:06:55.506 INFO: using host memory buffer for IO 00:06:55.506 Hello world! 
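The hello_world output just above comes from SPDK's controller enumeration flow: the host probes the local PCIe bus, attaches to each NVMe controller it finds, and walks the active namespaces before doing the buffer write/read that prints "Hello world!". A minimal sketch of that probe/attach pattern, assuming SPDK headers and a hugepage-backed environment; the program name and GB rounding are illustrative and error handling is trimmed:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Called once per discovered controller; returning true requests attach. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;
}

/* Called after attach completes; report every active namespace and its
 * size, mirroring the "Namespace ID: n size: xGB" lines in the log. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	uint32_t nsid;

	printf("Attached to %s\n", trid->traddr);
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("  Namespace ID: %u size: %" PRIu64 "GB\n",
		       nsid, spdk_nvme_ns_get_size(ns) / 1000000000);
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_world_sketch"; /* illustrative name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL transport ID: enumerate all local PCIe NVMe controllers. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}

The example as run above then allocates an I/O qpair per controller and polls it for the write/read completion, which is what produces one "Hello world!" per namespace.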
00:06:55.506 00:06:55.506 real 0m0.244s 00:06:55.506 user 0m0.089s 00:06:55.506 sys 0m0.106s 00:06:55.506 06:05:14 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.506 06:05:14 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:55.506 ************************************ 00:06:55.506 END TEST nvme_hello_world 00:06:55.506 ************************************ 00:06:55.506 06:05:15 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:06:55.506 06:05:15 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.506 06:05:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.506 06:05:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.506 ************************************ 00:06:55.506 START TEST nvme_sgl 00:06:55.506 ************************************ 00:06:55.506 06:05:15 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:06:55.767 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:06:55.767 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:06:56.093 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:06:56.093 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:06:56.093 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:06:56.093 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:06:56.093 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:06:56.093 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:06:56.093 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:06:56.093 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:06:56.093 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:06:56.093 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:06:56.093 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:06:56.093 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:06:56.093 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:06:56.093 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:06:56.093 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:06:56.093 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:06:56.094 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:06:56.094 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:06:56.094 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:06:56.094 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:06:56.094 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:06:56.094 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:06:56.094 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:06:56.094 NVMe Readv/Writev Request test 00:06:56.094 Attached to 0000:00:10.0 00:06:56.094 Attached to 0000:00:11.0 00:06:56.094 Attached to 0000:00:13.0 00:06:56.094 Attached to 0000:00:12.0 00:06:56.094 0000:00:10.0: build_io_request_2 test passed 00:06:56.094 0000:00:10.0: build_io_request_4 test passed 00:06:56.094 0000:00:10.0: build_io_request_5 test passed 00:06:56.094 0000:00:10.0: build_io_request_6 test passed 00:06:56.094 0000:00:10.0: build_io_request_7 test passed 00:06:56.094 0000:00:10.0: build_io_request_10 test passed 00:06:56.094 0000:00:11.0: build_io_request_2 test passed 00:06:56.094 0000:00:11.0: build_io_request_4 test passed 00:06:56.094 0000:00:11.0: build_io_request_5 test passed 00:06:56.094 0000:00:11.0: build_io_request_6 test passed 00:06:56.094 0000:00:11.0: build_io_request_7 test passed 00:06:56.094 0000:00:11.0: build_io_request_10 test passed 00:06:56.094 Cleaning up... 00:06:56.094 00:06:56.094 real 0m0.573s 00:06:56.094 user 0m0.419s 00:06:56.094 sys 0m0.101s 00:06:56.094 06:05:15 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.094 ************************************ 00:06:56.094 END TEST nvme_sgl 00:06:56.094 06:05:15 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:06:56.094 ************************************ 00:06:56.094 06:05:15 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:06:56.094 06:05:15 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.094 06:05:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.094 06:05:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.094 ************************************ 00:06:56.094 START TEST nvme_e2edp 00:06:56.094 ************************************ 00:06:56.094 06:05:15 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:06:56.369 NVMe Write/Read with End-to-End data protection test 00:06:56.369 Attached to 0000:00:10.0 00:06:56.369 Attached to 0000:00:11.0 00:06:56.369 Attached to 0000:00:13.0 00:06:56.369 Attached to 0000:00:12.0 00:06:56.369 Cleaning up... 
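The nvme_sgl pass/fail lines above exercise SPDK's vectored I/O path: the driver pulls scatter-gather elements from user callbacks, and rejects requests whose total SGE bytes disagree with the LBA count (the "Invalid IO length parameter" cases). A hedged sketch of the callback pattern, assuming a namespace and qpair set up as in the previous sketch; the two-element buffer list is illustrative:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

struct sge_ctx {
	struct iovec iovs[2]; /* illustrative two-element SGL */
	int idx;
};

/* The driver rewinds the SGL to a byte offset before walking it. */
static void
reset_sgl_cb(void *cb_arg, uint32_t offset)
{
	struct sge_ctx *ctx = cb_arg;

	ctx->idx = 0;
	(void)offset; /* a full implementation would seek to this offset */
}

/* The driver pulls one scatter-gather element at a time. */
static int
next_sge_cb(void *cb_arg, void **address, uint32_t *length)
{
	struct sge_ctx *ctx = cb_arg;

	*address = ctx->iovs[ctx->idx].iov_base;
	*length = ctx->iovs[ctx->idx].iov_len;
	ctx->idx++;
	return 0;
}

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* check spdk_nvme_cpl_is_error(cpl) here */
}

/* If the bytes described by the SGEs do not add up to lba_count sectors,
 * the submit path fails up front -- the invalid-length test cases above. */
static int
submit_writev(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	      struct sge_ctx *ctx, uint64_t lba, uint32_t lba_count)
{
	return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
				       io_complete, ctx, 0 /* io_flags */,
				       reset_sgl_cb, next_sge_cb);
}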
00:06:56.369 ************************************ 00:06:56.369 END TEST nvme_e2edp 00:06:56.369 ************************************ 00:06:56.369 00:06:56.369 real 0m0.222s 00:06:56.369 user 0m0.077s 00:06:56.369 sys 0m0.094s 00:06:56.369 06:05:15 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.369 06:05:15 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:06:56.369 06:05:15 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:06:56.369 06:05:15 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.369 06:05:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.369 06:05:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.369 ************************************ 00:06:56.369 START TEST nvme_reserve 00:06:56.369 ************************************ 00:06:56.369 06:05:15 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:06:56.630 ===================================================== 00:06:56.630 NVMe Controller at PCI bus 0, device 16, function 0 00:06:56.630 ===================================================== 00:06:56.630 Reservations: Not Supported 00:06:56.630 ===================================================== 00:06:56.630 NVMe Controller at PCI bus 0, device 17, function 0 00:06:56.630 ===================================================== 00:06:56.630 Reservations: Not Supported 00:06:56.630 ===================================================== 00:06:56.630 NVMe Controller at PCI bus 0, device 19, function 0 00:06:56.630 ===================================================== 00:06:56.630 Reservations: Not Supported 00:06:56.630 ===================================================== 00:06:56.630 NVMe Controller at PCI bus 0, device 18, function 0 00:06:56.630 ===================================================== 00:06:56.630 Reservations: Not Supported 00:06:56.630 Reservation test passed 00:06:56.630 ************************************ 00:06:56.630 END TEST nvme_reserve 00:06:56.630 ************************************ 00:06:56.630 00:06:56.630 real 0m0.212s 00:06:56.630 user 0m0.072s 00:06:56.630 sys 0m0.094s 00:06:56.630 06:05:16 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.630 06:05:16 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:06:56.630 06:05:16 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:06:56.630 06:05:16 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.630 06:05:16 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.630 06:05:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.630 ************************************ 00:06:56.630 START TEST nvme_err_injection 00:06:56.630 ************************************ 00:06:56.630 06:05:16 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:06:56.891 NVMe Error Injection test 00:06:56.891 Attached to 0000:00:10.0 00:06:56.891 Attached to 0000:00:11.0 00:06:56.891 Attached to 0000:00:13.0 00:06:56.891 Attached to 0000:00:12.0 00:06:56.891 0000:00:10.0: get features failed as expected 00:06:56.891 0000:00:11.0: get features failed as expected 00:06:56.891 0000:00:13.0: get features failed as expected 00:06:56.891 0000:00:12.0: get features failed as expected 00:06:56.891 
0000:00:12.0: get features successfully as expected 00:06:56.891 0000:00:10.0: get features successfully as expected 00:06:56.891 0000:00:11.0: get features successfully as expected 00:06:56.891 0000:00:13.0: get features successfully as expected 00:06:56.891 0000:00:12.0: read failed as expected 00:06:56.891 0000:00:10.0: read failed as expected 00:06:56.891 0000:00:11.0: read failed as expected 00:06:56.891 0000:00:13.0: read failed as expected 00:06:56.891 0000:00:10.0: read successfully as expected 00:06:56.891 0000:00:11.0: read successfully as expected 00:06:56.891 0000:00:13.0: read successfully as expected 00:06:56.891 0000:00:12.0: read successfully as expected 00:06:56.891 Cleaning up... 00:06:56.891 00:06:56.891 real 0m0.223s 00:06:56.891 user 0m0.077s 00:06:56.891 sys 0m0.108s 00:06:56.891 ************************************ 00:06:56.891 END TEST nvme_err_injection 00:06:56.891 ************************************ 00:06:56.891 06:05:16 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.891 06:05:16 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:06:56.891 06:05:16 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:06:56.891 06:05:16 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:06:56.891 06:05:16 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.891 06:05:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.891 ************************************ 00:06:56.891 START TEST nvme_overhead 00:06:56.891 ************************************ 00:06:56.892 06:05:16 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:06:58.272 Initializing NVMe Controllers 00:06:58.272 Attached to 0000:00:10.0 00:06:58.272 Attached to 0000:00:11.0 00:06:58.272 Attached to 0000:00:13.0 00:06:58.272 Attached to 0000:00:12.0 00:06:58.272 Initialization complete. Launching workers. 
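The error-injection pass above arms fake failures ahead of admin commands and checks that Get Features first fails "as expected" and then, once the injection is consumed, succeeds. A sketch of how such a test can be built on SPDK's injection hook, assuming an attached controller; passing a NULL qpair to target the admin queue is my reading of the API and worth verifying against the headers:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
get_feature_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* With the injection armed, the completion carries the injected
	 * status, so spdk_nvme_cpl_is_error(cpl) is expected to be true. */
	printf("get features %s as expected\n",
	       spdk_nvme_cpl_is_error(cpl) ? "failed" : "completed");
}

static void
inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Arm one injected "invalid field" error for the next Get Features
	 * command; do_not_submit=true means the command never reaches the
	 * device and completes with the injected status instead. */
	spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL /* admin qpair */,
						SPDK_NVME_OPC_GET_FEATURES,
						true, 0, 1,
						SPDK_NVME_SCT_GENERIC,
						SPDK_NVME_SC_INVALID_FIELD);

	spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_NUMBER_OF_QUEUES,
					0, NULL, 0, get_feature_cb, NULL);

	while (spdk_nvme_ctrlr_process_admin_completions(ctrlr) == 0) {
		; /* poll until the (failed) completion arrives */
	}
}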
00:06:58.272 submit (in ns) avg, min, max = 11678.7, 9667.7, 91360.0 00:06:58.272 complete (in ns) avg, min, max = 7712.5, 7186.2, 93471.5 00:06:58.272 00:06:58.272 Submit histogram 00:06:58.272 ================ 00:06:58.272 Range in us Cumulative Count 00:06:58.272 9.649 - 9.698: 0.0256% ( 2) 00:06:58.272 9.994 - 10.043: 0.0384% ( 1) 00:06:58.272 10.142 - 10.191: 0.0512% ( 1) 00:06:58.272 10.289 - 10.338: 0.0769% ( 2) 00:06:58.272 10.338 - 10.388: 0.1025% ( 2) 00:06:58.272 10.388 - 10.437: 0.1409% ( 3) 00:06:58.272 10.437 - 10.486: 0.2178% ( 6) 00:06:58.272 10.486 - 10.535: 0.2434% ( 2) 00:06:58.272 10.535 - 10.585: 0.2690% ( 2) 00:06:58.272 10.585 - 10.634: 0.3331% ( 5) 00:06:58.272 10.634 - 10.683: 0.4484% ( 9) 00:06:58.272 10.683 - 10.732: 0.9096% ( 36) 00:06:58.272 10.732 - 10.782: 2.9465% ( 159) 00:06:58.272 10.782 - 10.831: 7.1227% ( 326) 00:06:58.272 10.831 - 10.880: 13.5024% ( 498) 00:06:58.272 10.880 - 10.929: 21.6500% ( 636) 00:06:58.272 10.929 - 10.978: 29.4517% ( 609) 00:06:58.272 10.978 - 11.028: 36.1901% ( 526) 00:06:58.272 11.028 - 11.077: 42.0958% ( 461) 00:06:58.272 11.077 - 11.126: 47.3098% ( 407) 00:06:58.272 11.126 - 11.175: 51.9857% ( 365) 00:06:58.272 11.175 - 11.225: 55.9826% ( 312) 00:06:58.272 11.225 - 11.274: 59.7745% ( 296) 00:06:58.272 11.274 - 11.323: 63.0028% ( 252) 00:06:58.272 11.323 - 11.372: 66.3464% ( 261) 00:06:58.272 11.372 - 11.422: 69.4081% ( 239) 00:06:58.272 11.422 - 11.471: 71.8166% ( 188) 00:06:58.272 11.471 - 11.520: 74.1609% ( 183) 00:06:58.272 11.520 - 11.569: 76.2106% ( 160) 00:06:58.272 11.569 - 11.618: 77.7095% ( 117) 00:06:58.272 11.618 - 11.668: 78.9008% ( 93) 00:06:58.272 11.668 - 11.717: 80.3484% ( 113) 00:06:58.272 11.717 - 11.766: 81.6039% ( 98) 00:06:58.272 11.766 - 11.815: 82.6159% ( 79) 00:06:58.272 11.815 - 11.865: 83.6920% ( 84) 00:06:58.272 11.865 - 11.914: 84.6272% ( 73) 00:06:58.272 11.914 - 11.963: 85.9211% ( 101) 00:06:58.272 11.963 - 12.012: 87.0484% ( 88) 00:06:58.272 12.012 - 12.062: 87.8811% ( 65) 00:06:58.272 12.062 - 12.111: 88.7907% ( 71) 00:06:58.272 12.111 - 12.160: 89.4568% ( 52) 00:06:58.272 12.160 - 12.209: 89.8796% ( 33) 00:06:58.272 12.209 - 12.258: 90.2895% ( 32) 00:06:58.272 12.258 - 12.308: 90.5970% ( 24) 00:06:58.272 12.308 - 12.357: 90.8019% ( 16) 00:06:58.272 12.357 - 12.406: 90.9044% ( 8) 00:06:58.272 12.406 - 12.455: 91.0325% ( 10) 00:06:58.272 12.455 - 12.505: 91.1478% ( 9) 00:06:58.272 12.505 - 12.554: 91.3016% ( 12) 00:06:58.272 12.554 - 12.603: 91.4169% ( 9) 00:06:58.272 12.603 - 12.702: 91.6090% ( 15) 00:06:58.272 12.702 - 12.800: 91.9549% ( 27) 00:06:58.272 12.800 - 12.898: 92.1599% ( 16) 00:06:58.272 12.898 - 12.997: 92.3392% ( 14) 00:06:58.272 12.997 - 13.095: 92.6467% ( 24) 00:06:58.272 13.095 - 13.194: 92.9541% ( 24) 00:06:58.272 13.194 - 13.292: 93.2104% ( 20) 00:06:58.272 13.292 - 13.391: 93.4153% ( 16) 00:06:58.272 13.391 - 13.489: 93.6331% ( 17) 00:06:58.272 13.489 - 13.588: 93.7740% ( 11) 00:06:58.272 13.588 - 13.686: 93.9662% ( 15) 00:06:58.272 13.686 - 13.785: 94.1840% ( 17) 00:06:58.272 13.785 - 13.883: 94.2480% ( 5) 00:06:58.272 13.883 - 13.982: 94.3761% ( 10) 00:06:58.272 13.982 - 14.080: 94.4658% ( 7) 00:06:58.272 14.080 - 14.178: 94.6067% ( 11) 00:06:58.272 14.178 - 14.277: 94.6964% ( 7) 00:06:58.272 14.277 - 14.375: 94.8245% ( 10) 00:06:58.272 14.375 - 14.474: 94.9270% ( 8) 00:06:58.272 14.474 - 14.572: 95.0679% ( 11) 00:06:58.272 14.572 - 14.671: 95.1191% ( 4) 00:06:58.272 14.671 - 14.769: 95.2344% ( 9) 00:06:58.273 14.769 - 14.868: 95.2985% ( 5) 00:06:58.273 14.868 - 14.966: 
95.3625% ( 5) 00:06:58.273 14.966 - 15.065: 95.4650% ( 8) 00:06:58.273 15.065 - 15.163: 95.6188% ( 12) 00:06:58.273 15.163 - 15.262: 95.7341% ( 9) 00:06:58.273 15.262 - 15.360: 95.9262% ( 15) 00:06:58.273 15.360 - 15.458: 95.9775% ( 4) 00:06:58.273 15.458 - 15.557: 96.1184% ( 11) 00:06:58.273 15.557 - 15.655: 96.2080% ( 7) 00:06:58.273 15.655 - 15.754: 96.2721% ( 5) 00:06:58.273 15.754 - 15.852: 96.4002% ( 10) 00:06:58.273 15.852 - 15.951: 96.5027% ( 8) 00:06:58.273 15.951 - 16.049: 96.6949% ( 15) 00:06:58.273 16.049 - 16.148: 96.7461% ( 4) 00:06:58.273 16.148 - 16.246: 96.8358% ( 7) 00:06:58.273 16.246 - 16.345: 96.9254% ( 7) 00:06:58.273 16.345 - 16.443: 97.0664% ( 11) 00:06:58.273 16.443 - 16.542: 97.2201% ( 12) 00:06:58.273 16.542 - 16.640: 97.3482% ( 10) 00:06:58.273 16.640 - 16.738: 97.4379% ( 7) 00:06:58.273 16.738 - 16.837: 97.5660% ( 10) 00:06:58.273 16.837 - 16.935: 97.6941% ( 10) 00:06:58.273 16.935 - 17.034: 97.7325% ( 3) 00:06:58.273 17.034 - 17.132: 97.8606% ( 10) 00:06:58.273 17.132 - 17.231: 97.9247% ( 5) 00:06:58.273 17.231 - 17.329: 98.0528% ( 10) 00:06:58.273 17.329 - 17.428: 98.1168% ( 5) 00:06:58.273 17.428 - 17.526: 98.1681% ( 4) 00:06:58.273 17.526 - 17.625: 98.2578% ( 7) 00:06:58.273 17.625 - 17.723: 98.3346% ( 6) 00:06:58.273 17.723 - 17.822: 98.3987% ( 5) 00:06:58.273 17.822 - 17.920: 98.4755% ( 6) 00:06:58.273 17.920 - 18.018: 98.5780% ( 8) 00:06:58.273 18.018 - 18.117: 98.6164% ( 3) 00:06:58.273 18.117 - 18.215: 98.6293% ( 1) 00:06:58.273 18.215 - 18.314: 98.6933% ( 5) 00:06:58.273 18.314 - 18.412: 98.7702% ( 6) 00:06:58.273 18.412 - 18.511: 98.8342% ( 5) 00:06:58.273 18.511 - 18.609: 98.8855% ( 4) 00:06:58.273 18.609 - 18.708: 98.9111% ( 2) 00:06:58.273 18.708 - 18.806: 99.0008% ( 7) 00:06:58.273 18.806 - 18.905: 99.0136% ( 1) 00:06:58.273 18.905 - 19.003: 99.0392% ( 2) 00:06:58.273 19.003 - 19.102: 99.0904% ( 4) 00:06:58.273 19.102 - 19.200: 99.1289% ( 3) 00:06:58.273 19.200 - 19.298: 99.1673% ( 3) 00:06:58.273 19.298 - 19.397: 99.1929% ( 2) 00:06:58.273 19.397 - 19.495: 99.2185% ( 2) 00:06:58.273 19.594 - 19.692: 99.2570% ( 3) 00:06:58.273 19.692 - 19.791: 99.2826% ( 2) 00:06:58.273 19.791 - 19.889: 99.3082% ( 2) 00:06:58.273 19.889 - 19.988: 99.3210% ( 1) 00:06:58.273 19.988 - 20.086: 99.3338% ( 1) 00:06:58.273 20.185 - 20.283: 99.3595% ( 2) 00:06:58.273 20.283 - 20.382: 99.3851% ( 2) 00:06:58.273 20.382 - 20.480: 99.3979% ( 1) 00:06:58.273 20.578 - 20.677: 99.4107% ( 1) 00:06:58.273 20.677 - 20.775: 99.4235% ( 1) 00:06:58.273 20.775 - 20.874: 99.4363% ( 1) 00:06:58.273 20.874 - 20.972: 99.4491% ( 1) 00:06:58.273 21.071 - 21.169: 99.4620% ( 1) 00:06:58.273 21.169 - 21.268: 99.4748% ( 1) 00:06:58.273 21.268 - 21.366: 99.4876% ( 1) 00:06:58.273 21.465 - 21.563: 99.5132% ( 2) 00:06:58.273 21.662 - 21.760: 99.5260% ( 1) 00:06:58.273 21.858 - 21.957: 99.5388% ( 1) 00:06:58.273 21.957 - 22.055: 99.5516% ( 1) 00:06:58.273 22.055 - 22.154: 99.5644% ( 1) 00:06:58.273 22.252 - 22.351: 99.5772% ( 1) 00:06:58.273 22.351 - 22.449: 99.5901% ( 1) 00:06:58.273 22.449 - 22.548: 99.6029% ( 1) 00:06:58.273 22.548 - 22.646: 99.6157% ( 1) 00:06:58.273 22.942 - 23.040: 99.6285% ( 1) 00:06:58.273 23.828 - 23.926: 99.6413% ( 1) 00:06:58.273 23.926 - 24.025: 99.6541% ( 1) 00:06:58.273 25.108 - 25.206: 99.6669% ( 1) 00:06:58.273 25.403 - 25.600: 99.6797% ( 1) 00:06:58.273 25.994 - 26.191: 99.7054% ( 2) 00:06:58.273 26.585 - 26.782: 99.7182% ( 1) 00:06:58.273 26.782 - 26.978: 99.7310% ( 1) 00:06:58.273 27.372 - 27.569: 99.7438% ( 1) 00:06:58.273 27.766 - 27.963: 99.7566% ( 1) 
00:06:58.273 35.643 - 35.840: 99.7694% ( 1) 00:06:58.273 36.037 - 36.234: 99.7822% ( 1) 00:06:58.273 36.234 - 36.431: 99.8078% ( 2) 00:06:58.273 37.218 - 37.415: 99.8207% ( 1) 00:06:58.273 38.991 - 39.188: 99.8335% ( 1) 00:06:58.273 43.717 - 43.914: 99.8591% ( 2) 00:06:58.273 44.308 - 44.505: 99.8719% ( 1) 00:06:58.273 45.292 - 45.489: 99.8847% ( 1) 00:06:58.273 46.671 - 46.868: 99.8975% ( 1) 00:06:58.273 55.532 - 55.926: 99.9103% ( 1) 00:06:58.273 56.320 - 56.714: 99.9231% ( 1) 00:06:58.273 56.714 - 57.108: 99.9359% ( 1) 00:06:58.273 62.622 - 63.015: 99.9488% ( 1) 00:06:58.273 63.803 - 64.197: 99.9616% ( 1) 00:06:58.273 67.348 - 67.742: 99.9744% ( 1) 00:06:58.273 81.132 - 81.526: 99.9872% ( 1) 00:06:58.273 90.978 - 91.372: 100.0000% ( 1) 00:06:58.273 00:06:58.273 Complete histogram 00:06:58.273 ================== 00:06:58.273 Range in us Cumulative Count 00:06:58.273 7.138 - 7.188: 0.0128% ( 1) 00:06:58.273 7.188 - 7.237: 0.2946% ( 22) 00:06:58.273 7.237 - 7.286: 3.4333% ( 245) 00:06:58.273 7.286 - 7.335: 15.0141% ( 904) 00:06:58.273 7.335 - 7.385: 34.2941% ( 1505) 00:06:58.273 7.385 - 7.434: 52.7159% ( 1438) 00:06:58.273 7.434 - 7.483: 65.9365% ( 1032) 00:06:58.273 7.483 - 7.532: 74.5965% ( 676) 00:06:58.273 7.532 - 7.582: 81.2324% ( 518) 00:06:58.273 7.582 - 7.631: 85.8442% ( 360) 00:06:58.273 7.631 - 7.680: 88.9060% ( 239) 00:06:58.273 7.680 - 7.729: 90.4561% ( 121) 00:06:58.273 7.729 - 7.778: 91.4040% ( 74) 00:06:58.273 7.778 - 7.828: 92.0061% ( 47) 00:06:58.273 7.828 - 7.877: 92.5570% ( 43) 00:06:58.273 7.877 - 7.926: 92.7876% ( 18) 00:06:58.273 7.926 - 7.975: 93.0566% ( 21) 00:06:58.273 7.975 - 8.025: 93.4153% ( 28) 00:06:58.273 8.025 - 8.074: 93.6843% ( 21) 00:06:58.273 8.074 - 8.123: 94.0815% ( 31) 00:06:58.273 8.123 - 8.172: 94.4017% ( 25) 00:06:58.273 8.172 - 8.222: 94.7861% ( 30) 00:06:58.273 8.222 - 8.271: 95.0551% ( 21) 00:06:58.273 8.271 - 8.320: 95.3113% ( 20) 00:06:58.273 8.320 - 8.369: 95.4650% ( 12) 00:06:58.273 8.369 - 8.418: 95.5931% ( 10) 00:06:58.273 8.418 - 8.468: 95.6572% ( 5) 00:06:58.273 8.468 - 8.517: 95.8109% ( 12) 00:06:58.273 8.517 - 8.566: 95.9134% ( 8) 00:06:58.273 8.566 - 8.615: 96.0031% ( 7) 00:06:58.273 8.615 - 8.665: 96.0415% ( 3) 00:06:58.273 8.665 - 8.714: 96.1312% ( 7) 00:06:58.273 8.714 - 8.763: 96.1952% ( 5) 00:06:58.273 8.763 - 8.812: 96.3105% ( 9) 00:06:58.273 8.812 - 8.862: 96.3874% ( 6) 00:06:58.273 8.862 - 8.911: 96.4258% ( 3) 00:06:58.273 8.911 - 8.960: 96.5027% ( 6) 00:06:58.273 8.960 - 9.009: 96.5411% ( 3) 00:06:58.273 9.009 - 9.058: 96.5539% ( 1) 00:06:58.273 9.058 - 9.108: 96.6052% ( 4) 00:06:58.273 9.108 - 9.157: 96.6308% ( 2) 00:06:58.273 9.157 - 9.206: 96.6564% ( 2) 00:06:58.273 9.206 - 9.255: 96.6949% ( 3) 00:06:58.273 9.255 - 9.305: 96.7205% ( 2) 00:06:58.273 9.305 - 9.354: 96.7589% ( 3) 00:06:58.273 9.354 - 9.403: 96.7845% ( 2) 00:06:58.273 9.403 - 9.452: 96.7973% ( 1) 00:06:58.273 9.452 - 9.502: 96.8358% ( 3) 00:06:58.273 9.502 - 9.551: 96.8742% ( 3) 00:06:58.273 9.551 - 9.600: 96.8870% ( 1) 00:06:58.273 9.600 - 9.649: 96.8998% ( 1) 00:06:58.273 9.649 - 9.698: 96.9126% ( 1) 00:06:58.273 9.698 - 9.748: 96.9383% ( 2) 00:06:58.273 9.748 - 9.797: 96.9639% ( 2) 00:06:58.273 9.797 - 9.846: 96.9895% ( 2) 00:06:58.273 9.846 - 9.895: 97.0279% ( 3) 00:06:58.273 9.895 - 9.945: 97.0535% ( 2) 00:06:58.273 9.945 - 9.994: 97.0920% ( 3) 00:06:58.273 10.092 - 10.142: 97.1817% ( 7) 00:06:58.273 10.142 - 10.191: 97.2073% ( 2) 00:06:58.273 10.191 - 10.240: 97.2713% ( 5) 00:06:58.274 10.240 - 10.289: 97.2970% ( 2) 00:06:58.274 10.338 - 10.388: 97.3226% 
( 2) 00:06:58.274 10.388 - 10.437: 97.3354% ( 1) 00:06:58.274 10.437 - 10.486: 97.3738% ( 3) 00:06:58.274 10.486 - 10.535: 97.5019% ( 10) 00:06:58.274 10.535 - 10.585: 97.5147% ( 1) 00:06:58.274 10.585 - 10.634: 97.5660% ( 4) 00:06:58.274 10.634 - 10.683: 97.6044% ( 3) 00:06:58.274 10.683 - 10.732: 97.6428% ( 3) 00:06:58.274 10.732 - 10.782: 97.6556% ( 1) 00:06:58.274 10.782 - 10.831: 97.7197% ( 5) 00:06:58.274 10.831 - 10.880: 97.7581% ( 3) 00:06:58.274 10.880 - 10.929: 97.7838% ( 2) 00:06:58.274 10.929 - 10.978: 97.8478% ( 5) 00:06:58.274 10.978 - 11.028: 97.8606% ( 1) 00:06:58.274 11.028 - 11.077: 97.8734% ( 1) 00:06:58.274 11.077 - 11.126: 97.8862% ( 1) 00:06:58.274 11.126 - 11.175: 97.9247% ( 3) 00:06:58.274 11.175 - 11.225: 97.9503% ( 2) 00:06:58.274 11.225 - 11.274: 97.9631% ( 1) 00:06:58.274 11.274 - 11.323: 98.0400% ( 6) 00:06:58.274 11.323 - 11.372: 98.0528% ( 1) 00:06:58.274 11.372 - 11.422: 98.0912% ( 3) 00:06:58.274 11.520 - 11.569: 98.1168% ( 2) 00:06:58.274 11.618 - 11.668: 98.1425% ( 2) 00:06:58.274 11.668 - 11.717: 98.1681% ( 2) 00:06:58.274 11.717 - 11.766: 98.1937% ( 2) 00:06:58.274 11.766 - 11.815: 98.2065% ( 1) 00:06:58.274 11.914 - 11.963: 98.2193% ( 1) 00:06:58.274 12.308 - 12.357: 98.2321% ( 1) 00:06:58.274 12.357 - 12.406: 98.2449% ( 1) 00:06:58.274 12.554 - 12.603: 98.2578% ( 1) 00:06:58.274 12.800 - 12.898: 98.2834% ( 2) 00:06:58.274 12.898 - 12.997: 98.3218% ( 3) 00:06:58.274 12.997 - 13.095: 98.3859% ( 5) 00:06:58.274 13.095 - 13.194: 98.4499% ( 5) 00:06:58.274 13.194 - 13.292: 98.4755% ( 2) 00:06:58.274 13.292 - 13.391: 98.5140% ( 3) 00:06:58.274 13.391 - 13.489: 98.5652% ( 4) 00:06:58.274 13.489 - 13.588: 98.6421% ( 6) 00:06:58.274 13.588 - 13.686: 98.7061% ( 5) 00:06:58.274 13.686 - 13.785: 98.7958% ( 7) 00:06:58.274 13.785 - 13.883: 98.8342% ( 3) 00:06:58.274 13.883 - 13.982: 98.9495% ( 9) 00:06:58.274 13.982 - 14.080: 99.0264% ( 6) 00:06:58.274 14.080 - 14.178: 99.0648% ( 3) 00:06:58.274 14.178 - 14.277: 99.1801% ( 9) 00:06:58.274 14.375 - 14.474: 99.1929% ( 1) 00:06:58.274 14.474 - 14.572: 99.2057% ( 1) 00:06:58.274 14.572 - 14.671: 99.2314% ( 2) 00:06:58.274 14.671 - 14.769: 99.2698% ( 3) 00:06:58.274 14.769 - 14.868: 99.2826% ( 1) 00:06:58.274 14.868 - 14.966: 99.3082% ( 2) 00:06:58.274 14.966 - 15.065: 99.3338% ( 2) 00:06:58.274 15.163 - 15.262: 99.3467% ( 1) 00:06:58.274 15.360 - 15.458: 99.3595% ( 1) 00:06:58.274 15.655 - 15.754: 99.3723% ( 1) 00:06:58.274 15.852 - 15.951: 99.3851% ( 1) 00:06:58.274 15.951 - 16.049: 99.3979% ( 1) 00:06:58.274 16.246 - 16.345: 99.4107% ( 1) 00:06:58.274 16.345 - 16.443: 99.4235% ( 1) 00:06:58.274 16.738 - 16.837: 99.4363% ( 1) 00:06:58.274 16.935 - 17.034: 99.4491% ( 1) 00:06:58.274 17.526 - 17.625: 99.4620% ( 1) 00:06:58.274 17.723 - 17.822: 99.4748% ( 1) 00:06:58.274 17.920 - 18.018: 99.4876% ( 1) 00:06:58.274 18.117 - 18.215: 99.5004% ( 1) 00:06:58.274 18.412 - 18.511: 99.5132% ( 1) 00:06:58.274 18.708 - 18.806: 99.5260% ( 1) 00:06:58.274 19.102 - 19.200: 99.5388% ( 1) 00:06:58.274 19.200 - 19.298: 99.5644% ( 2) 00:06:58.274 19.495 - 19.594: 99.5772% ( 1) 00:06:58.274 19.692 - 19.791: 99.5901% ( 1) 00:06:58.274 19.988 - 20.086: 99.6029% ( 1) 00:06:58.274 20.283 - 20.382: 99.6157% ( 1) 00:06:58.274 20.382 - 20.480: 99.6413% ( 2) 00:06:58.274 20.578 - 20.677: 99.6541% ( 1) 00:06:58.274 21.268 - 21.366: 99.6797% ( 2) 00:06:58.274 21.366 - 21.465: 99.6925% ( 1) 00:06:58.274 21.563 - 21.662: 99.7054% ( 1) 00:06:58.274 21.662 - 21.760: 99.7182% ( 1) 00:06:58.274 21.760 - 21.858: 99.7310% ( 1) 00:06:58.274 21.957 - 
22.055: 99.7438% ( 1) 00:06:58.274 22.548 - 22.646: 99.7566% ( 1) 00:06:58.274 22.745 - 22.843: 99.7694% ( 1) 00:06:58.274 22.843 - 22.942: 99.7822% ( 1) 00:06:58.274 23.138 - 23.237: 99.7950% ( 1) 00:06:58.274 23.631 - 23.729: 99.8078% ( 1) 00:06:58.274 23.729 - 23.828: 99.8207% ( 1) 00:06:58.274 26.388 - 26.585: 99.8463% ( 2) 00:06:58.274 29.538 - 29.735: 99.8591% ( 1) 00:06:58.274 34.855 - 35.052: 99.8847% ( 2) 00:06:58.274 35.446 - 35.643: 99.8975% ( 1) 00:06:58.274 39.582 - 39.778: 99.9103% ( 1) 00:06:58.274 41.354 - 41.551: 99.9231% ( 1) 00:06:58.274 41.945 - 42.142: 99.9359% ( 1) 00:06:58.274 42.338 - 42.535: 99.9488% ( 1) 00:06:58.274 42.929 - 43.126: 99.9616% ( 1) 00:06:58.274 46.277 - 46.474: 99.9744% ( 1) 00:06:58.274 51.988 - 52.382: 99.9872% ( 1) 00:06:58.274 93.342 - 93.735: 100.0000% ( 1) 00:06:58.274 00:06:58.274 ************************************ 00:06:58.274 END TEST nvme_overhead 00:06:58.274 ************************************ 00:06:58.274 00:06:58.274 real 0m1.232s 00:06:58.274 user 0m1.076s 00:06:58.274 sys 0m0.103s 00:06:58.274 06:05:17 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.274 06:05:17 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:06:58.274 06:05:17 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:06:58.274 06:05:17 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:58.274 06:05:17 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.274 06:05:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.274 ************************************ 00:06:58.274 START TEST nvme_arbitration 00:06:58.274 ************************************ 00:06:58.274 06:05:17 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:01.566 Initializing NVMe Controllers 00:07:01.567 Attached to 0000:00:10.0 00:07:01.567 Attached to 0000:00:11.0 00:07:01.567 Attached to 0000:00:13.0 00:07:01.567 Attached to 0000:00:12.0 00:07:01.567 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:01.567 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:01.567 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:01.567 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:01.567 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:01.567 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:01.567 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:01.567 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:01.567 Initialization complete. Launching workers. 
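The submit/complete histograms that nvme_overhead printed above come from timing each I/O twice: once around the submission call itself, and once around the completion poll that reaps it. A minimal sketch of that measurement, assuming a namespace, qpair, and a DMA-safe buffer from spdk_zmalloc(); tick-to-nanosecond conversion uses the env layer's TSC rate:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)cb_arg = true;
}

/* Submit cost: time spent inside spdk_nvme_ns_cmd_read(). Complete cost:
 * time spent inside the spdk_nvme_qpair_process_completions() call that
 * actually reaps the I/O. Both are converted from TSC ticks to ns. */
static void
time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	      void *buf, uint64_t *submit_ns, uint64_t *complete_ns)
{
	uint64_t hz = spdk_get_ticks_hz();
	uint64_t t0, poll_start;
	bool done = false;

	t0 = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* count */,
			      io_done, &done, 0 /* io_flags */);
	*submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

	while (!done) {
		poll_start = spdk_get_ticks();
		if (spdk_nvme_qpair_process_completions(qpair, 0) > 0) {
			*complete_ns = (spdk_get_ticks() - poll_start) *
				       1000000000ULL / hz;
		}
	}
}

Binning each pair of deltas into buckets per run is what produces the cumulative-count histograms printed above.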
00:07:01.567 Starting thread on core 1 with urgent priority queue 00:07:01.567 Starting thread on core 2 with urgent priority queue 00:07:01.567 Starting thread on core 3 with urgent priority queue 00:07:01.567 Starting thread on core 0 with urgent priority queue 00:07:01.567 QEMU NVMe Ctrl (12340 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:01.567 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:01.567 QEMU NVMe Ctrl (12341 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:07:01.567 QEMU NVMe Ctrl (12342 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:07:01.567 QEMU NVMe Ctrl (12343 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:07:01.567 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios 00:07:01.567 ======================================================== 00:07:01.567 00:07:01.567 00:07:01.567 real 0m3.331s 00:07:01.567 user 0m9.282s 00:07:01.567 sys 0m0.118s 00:07:01.567 06:05:21 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.567 06:05:21 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:01.567 ************************************ 00:07:01.567 END TEST nvme_arbitration 00:07:01.567 ************************************ 00:07:01.567 06:05:21 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:01.567 06:05:21 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:01.567 06:05:21 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.567 06:05:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.567 ************************************ 00:07:01.567 START TEST nvme_single_aen 00:07:01.567 ************************************ 00:07:01.567 06:05:21 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:01.825 Asynchronous Event Request test 00:07:01.825 Attached to 0000:00:10.0 00:07:01.825 Attached to 0000:00:11.0 00:07:01.825 Attached to 0000:00:13.0 00:07:01.825 Attached to 0000:00:12.0 00:07:01.825 Reset controller to setup AER completions for this process 00:07:01.825 Registering asynchronous event callbacks... 
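The arbitration run above pins one worker per lcore and, per the "urgent priority queue" lines, gives each worker a weighted-round-robin qpair priority before comparing per-core IO/s. A hedged sketch of allocating such a qpair; the qprio field only takes effect when the controller was brought up with WRR arbitration selected, which the example exposes through its command-line options:

#include "spdk/nvme.h"

/* Allocate an I/O qpair with urgent WRR priority. This only matters when
 * the controller was initialized with weighted round robin arbitration
 * (arb_mechanism in spdk_nvme_ctrlr_opts at probe time). */
static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = SPDK_NVME_QPRIO_URGENT;
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}

With all four workers on urgent queues, as here, WRR gives them near-equal service, which is consistent with the per-core IO/s figures above sitting within a few percent of each other.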
00:07:01.825 Getting orig temperature thresholds of all controllers 00:07:01.825 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.825 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.825 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.825 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.825 Setting all controllers temperature threshold low to trigger AER 00:07:01.825 Waiting for all controllers temperature threshold to be set lower 00:07:01.825 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.825 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:01.825 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.825 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:01.825 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.825 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:01.825 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.825 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:01.825 Waiting for all controllers to trigger AER and reset threshold 00:07:01.825 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.825 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.825 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.825 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.825 Cleaning up... 00:07:01.825 ************************************ 00:07:01.825 END TEST nvme_single_aen 00:07:01.825 ************************************ 00:07:01.825 00:07:01.825 real 0m0.210s 00:07:01.825 user 0m0.078s 00:07:01.825 sys 0m0.087s 00:07:01.825 06:05:21 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.825 06:05:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:01.825 06:05:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:01.825 06:05:21 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.825 06:05:21 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.825 06:05:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.825 ************************************ 00:07:01.825 START TEST nvme_doorbell_aers 00:07:01.825 ************************************ 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
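The xtrace above also shows how nvme_doorbell_aers builds its device list: get_nvme_bdfs renders a JSON bdev config with gen_nvme.sh and pulls each controller's PCIe address out with jq, and the test then gives every controller a 10-second budget under timeout --preserve-status (four devices, which is why the whole test lands near 40 s further down). The per-step Failure: lines in the runs that follow are printed by the doorbell_aers binary itself as it reports each sub-check; they do not abort the suite. Reconstructed from the traces:

    # Enumerate NVMe PCIe addresses the way get_nvme_bdfs does, then
    # run the doorbell/AER binary against each with a 10 s cap
    # (both stages copied from the xtrace above).
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # here: 0000:00:10.0 through 0000:00:13.0
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done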
00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:01.825 06:05:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:02.083 [2024-11-20 06:05:21.614031] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:12.049 Executing: test_write_invalid_db 00:07:12.049 Waiting for AER completion... 00:07:12.049 Failure: test_write_invalid_db 00:07:12.049 00:07:12.049 Executing: test_invalid_db_write_overflow_sq 00:07:12.049 Waiting for AER completion... 00:07:12.049 Failure: test_invalid_db_write_overflow_sq 00:07:12.049 00:07:12.049 Executing: test_invalid_db_write_overflow_cq 00:07:12.049 Waiting for AER completion... 00:07:12.049 Failure: test_invalid_db_write_overflow_cq 00:07:12.049 00:07:12.049 06:05:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:12.049 06:05:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:12.049 [2024-11-20 06:05:31.652352] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:22.016 Executing: test_write_invalid_db 00:07:22.016 Waiting for AER completion... 00:07:22.016 Failure: test_write_invalid_db 00:07:22.016 00:07:22.016 Executing: test_invalid_db_write_overflow_sq 00:07:22.016 Waiting for AER completion... 00:07:22.016 Failure: test_invalid_db_write_overflow_sq 00:07:22.016 00:07:22.016 Executing: test_invalid_db_write_overflow_cq 00:07:22.016 Waiting for AER completion... 00:07:22.016 Failure: test_invalid_db_write_overflow_cq 00:07:22.016 00:07:22.016 06:05:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:22.016 06:05:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:22.274 [2024-11-20 06:05:41.707603] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:32.232 Executing: test_write_invalid_db 00:07:32.232 Waiting for AER completion... 00:07:32.232 Failure: test_write_invalid_db 00:07:32.232 00:07:32.232 Executing: test_invalid_db_write_overflow_sq 00:07:32.232 Waiting for AER completion... 00:07:32.232 Failure: test_invalid_db_write_overflow_sq 00:07:32.232 00:07:32.232 Executing: test_invalid_db_write_overflow_cq 00:07:32.232 Waiting for AER completion... 
00:07:32.232 Failure: test_invalid_db_write_overflow_cq 00:07:32.232 00:07:32.232 06:05:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:32.232 06:05:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:32.232 [2024-11-20 06:05:51.727536] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.199 Executing: test_write_invalid_db 00:07:42.199 Waiting for AER completion... 00:07:42.199 Failure: test_write_invalid_db 00:07:42.199 00:07:42.199 Executing: test_invalid_db_write_overflow_sq 00:07:42.199 Waiting for AER completion... 00:07:42.199 Failure: test_invalid_db_write_overflow_sq 00:07:42.199 00:07:42.199 Executing: test_invalid_db_write_overflow_cq 00:07:42.199 Waiting for AER completion... 00:07:42.199 Failure: test_invalid_db_write_overflow_cq 00:07:42.199 00:07:42.199 00:07:42.199 real 0m40.185s 00:07:42.199 user 0m34.193s 00:07:42.199 sys 0m5.603s 00:07:42.199 06:06:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.199 06:06:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:07:42.199 ************************************ 00:07:42.199 END TEST nvme_doorbell_aers 00:07:42.199 ************************************ 00:07:42.199 06:06:01 nvme -- nvme/nvme.sh@97 -- # uname 00:07:42.199 06:06:01 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:07:42.199 06:06:01 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:42.199 06:06:01 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:42.199 06:06:01 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.199 06:06:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.199 ************************************ 00:07:42.199 START TEST nvme_multi_aen 00:07:42.199 ************************************ 00:07:42.199 06:06:01 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:42.199 [2024-11-20 06:06:01.766913] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.199 [2024-11-20 06:06:01.766989] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.199 [2024-11-20 06:06:01.767000] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.768191] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.768213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.768221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.769138] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. 
Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.769162] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.769170] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.770079] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.770100] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 [2024-11-20 06:06:01.770108] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63297) is not found. Dropping the request. 00:07:42.200 Child process pid: 63823 00:07:42.458 [Child] Asynchronous Event Request test 00:07:42.458 [Child] Attached to 0000:00:10.0 00:07:42.458 [Child] Attached to 0000:00:11.0 00:07:42.458 [Child] Attached to 0000:00:13.0 00:07:42.458 [Child] Attached to 0000:00:12.0 00:07:42.458 [Child] Registering asynchronous event callbacks... 00:07:42.458 [Child] Getting orig temperature thresholds of all controllers 00:07:42.458 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 [Child] Waiting for all controllers to trigger AER and reset threshold 00:07:42.458 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 [Child] Cleaning up... 00:07:42.458 Asynchronous Event Request test 00:07:42.458 Attached to 0000:00:10.0 00:07:42.458 Attached to 0000:00:11.0 00:07:42.458 Attached to 0000:00:13.0 00:07:42.458 Attached to 0000:00:12.0 00:07:42.458 Reset controller to setup AER completions for this process 00:07:42.458 Registering asynchronous event callbacks... 
00:07:42.458 Getting orig temperature thresholds of all controllers 00:07:42.458 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.458 Setting all controllers temperature threshold low to trigger AER 00:07:42.458 Waiting for all controllers temperature threshold to be set lower 00:07:42.458 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:42.458 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:42.458 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:42.458 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.458 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:42.458 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 Waiting for all controllers to trigger AER and reset threshold 00:07:42.458 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.458 Cleaning up... 00:07:42.458 00:07:42.458 real 0m0.426s 00:07:42.458 user 0m0.135s 00:07:42.458 sys 0m0.184s 00:07:42.458 06:06:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.458 06:06:02 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:07:42.458 ************************************ 00:07:42.458 END TEST nvme_multi_aen 00:07:42.458 ************************************ 00:07:42.458 06:06:02 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:42.458 06:06:02 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:42.458 06:06:02 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.458 06:06:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.458 ************************************ 00:07:42.458 START TEST nvme_startup 00:07:42.458 ************************************ 00:07:42.458 06:06:02 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:42.716 Initializing NVMe Controllers 00:07:42.716 Attached to 0000:00:10.0 00:07:42.716 Attached to 0000:00:11.0 00:07:42.716 Attached to 0000:00:13.0 00:07:42.716 Attached to 0000:00:12.0 00:07:42.716 Initialization complete. 00:07:42.716 Time used:165864.906 (us). 
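nvme_startup is purely a bring-up timer: it probes and attaches all four controllers and reports the elapsed time, so the Time used:165864.906 (us) line above means roughly 0.17 s to attach four QEMU controllers. Treating the -t 1000000 argument as a budget in the same microsecond unit is an inference from the output format rather than from the tool's help:

    # Time controller bring-up on its own; reading -t 1000000 as a
    # 1,000,000 us allowance is an assumption based on the
    # "Time used ... (us)" report above.
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000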
00:07:42.716 00:07:42.716 real 0m0.222s 00:07:42.716 user 0m0.068s 00:07:42.716 sys 0m0.093s 00:07:42.716 06:06:02 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.716 06:06:02 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:07:42.716 ************************************ 00:07:42.716 END TEST nvme_startup 00:07:42.716 ************************************ 00:07:42.717 06:06:02 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:07:42.717 06:06:02 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:42.717 06:06:02 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.717 06:06:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.717 ************************************ 00:07:42.717 START TEST nvme_multi_secondary 00:07:42.717 ************************************ 00:07:42.717 06:06:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:07:42.717 06:06:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63879 00:07:42.717 06:06:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:07:42.717 06:06:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63880 00:07:42.717 06:06:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:07:42.717 06:06:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:46.900 Initializing NVMe Controllers 00:07:46.900 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.900 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.900 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.900 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.900 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:46.900 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:46.900 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:46.900 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:46.900 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:46.900 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:46.900 Initialization complete. Launching workers. 
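The multi-secondary test set up above runs three spdk_nvme_perf instances against the same controllers simultaneously: all three pass -i 0 so they join one DPDK shared-memory group, letting one act as the primary process while the other two attach as secondaries, and each is pinned to its own core via -c. The commands below are copied from the nvme.sh@51/@53/@55 traces; backgrounding them together is how the overlap is achieved:

    # One long-running instance (-t 5) plus two shorter ones (-t 3),
    # all sharing shm group 0 and split across cores 0, 1 and 2.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait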
00:07:46.900 ======================================================== 00:07:46.900 Latency(us) 00:07:46.900 Device Information : IOPS MiB/s Average min max 00:07:46.900 PCIE (0000:00:10.0) NSID 1 from core 2: 3232.43 12.63 4948.56 1135.73 22138.41 00:07:46.900 PCIE (0000:00:11.0) NSID 1 from core 2: 3232.43 12.63 4949.81 1149.36 21568.37 00:07:46.900 PCIE (0000:00:13.0) NSID 1 from core 2: 3232.43 12.63 4950.12 1221.99 21515.15 00:07:46.900 PCIE (0000:00:12.0) NSID 1 from core 2: 3232.43 12.63 4950.35 1280.02 22038.60 00:07:46.900 PCIE (0000:00:12.0) NSID 2 from core 2: 3232.43 12.63 4950.51 1228.80 22031.26 00:07:46.900 PCIE (0000:00:12.0) NSID 3 from core 2: 3232.43 12.63 4956.77 1149.52 22664.07 00:07:46.900 ======================================================== 00:07:46.900 Total : 19394.60 75.76 4951.02 1135.73 22664.07 00:07:46.900 00:07:46.900 06:06:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63879 00:07:46.900 Initializing NVMe Controllers 00:07:46.900 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.900 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.900 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.900 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.900 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:46.900 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:46.900 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:46.900 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:46.900 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:46.900 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:46.900 Initialization complete. Launching workers. 00:07:46.900 ======================================================== 00:07:46.900 Latency(us) 00:07:46.900 Device Information : IOPS MiB/s Average min max 00:07:46.900 PCIE (0000:00:10.0) NSID 1 from core 1: 7865.60 30.73 2032.76 926.22 9430.97 00:07:46.900 PCIE (0000:00:11.0) NSID 1 from core 1: 7865.60 30.73 2033.77 978.41 9449.70 00:07:46.900 PCIE (0000:00:13.0) NSID 1 from core 1: 7865.60 30.73 2033.71 962.16 10199.00 00:07:46.900 PCIE (0000:00:12.0) NSID 1 from core 1: 7865.60 30.73 2033.67 951.28 10280.62 00:07:46.900 PCIE (0000:00:12.0) NSID 2 from core 1: 7865.60 30.73 2033.64 941.49 9829.34 00:07:46.900 PCIE (0000:00:12.0) NSID 3 from core 1: 7865.60 30.73 2033.60 895.52 9492.11 00:07:46.900 ======================================================== 00:07:46.900 Total : 47193.63 184.35 2033.52 895.52 10280.62 00:07:46.900 00:07:48.274 Initializing NVMe Controllers 00:07:48.274 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.274 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.274 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.274 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.274 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:48.274 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:48.274 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:48.274 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:48.274 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:48.274 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:48.274 Initialization complete. Launching workers. 
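The latency tables above and the one that follows are internally consistent with Little's law: at the queue depth of 16 used throughout, IOPS ~= qdepth / mean latency. For the core-2 run above, 16 / 4948.56 us ~= 3233, matching the reported 3232.43; for core 1, 16 / 2033 us ~= 7870 against the reported 7865.60. A one-liner to check any row:

    # Little's law sanity check for a qd=16 row:
    # IOPS ~= 16 / (mean latency in seconds).
    awk 'BEGIN { printf "%.0f IOPS\n", 16 / (4948.56 / 1e6) }'   # -> 3233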
00:07:48.274 ======================================================== 00:07:48.274 Latency(us) 00:07:48.274 Device Information : IOPS MiB/s Average min max 00:07:48.274 PCIE (0000:00:10.0) NSID 1 from core 0: 10754.23 42.01 1486.52 686.60 9353.80 00:07:48.274 PCIE (0000:00:11.0) NSID 1 from core 0: 10754.23 42.01 1487.40 701.27 9099.65 00:07:48.274 PCIE (0000:00:13.0) NSID 1 from core 0: 10754.23 42.01 1487.38 633.30 9768.02 00:07:48.274 PCIE (0000:00:12.0) NSID 1 from core 0: 10754.23 42.01 1487.37 607.20 9573.67 00:07:48.274 PCIE (0000:00:12.0) NSID 2 from core 0: 10754.23 42.01 1487.35 573.62 9367.06 00:07:48.274 PCIE (0000:00:12.0) NSID 3 from core 0: 10754.23 42.01 1487.34 546.88 9180.14 00:07:48.274 ======================================================== 00:07:48.274 Total : 64525.41 252.05 1487.23 546.88 9768.02 00:07:48.274 00:07:48.274 06:06:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63880 00:07:48.274 06:06:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63949 00:07:48.274 06:06:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:07:48.274 06:06:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:48.274 06:06:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63950 00:07:48.274 06:06:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:07:51.610 Initializing NVMe Controllers 00:07:51.610 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.610 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.610 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.610 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.610 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:51.610 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:51.610 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:51.610 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:51.610 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:51.610 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:51.610 Initialization complete. Launching workers. 
00:07:51.610 ======================================================== 00:07:51.610 Latency(us) 00:07:51.610 Device Information : IOPS MiB/s Average min max 00:07:51.610 PCIE (0000:00:10.0) NSID 1 from core 0: 7851.63 30.67 2036.34 712.14 6284.93 00:07:51.610 PCIE (0000:00:11.0) NSID 1 from core 0: 7851.63 30.67 2037.38 728.47 6622.77 00:07:51.610 PCIE (0000:00:13.0) NSID 1 from core 0: 7851.63 30.67 2037.35 709.35 7404.42 00:07:51.610 PCIE (0000:00:12.0) NSID 1 from core 0: 7851.63 30.67 2037.30 706.33 5939.31 00:07:51.610 PCIE (0000:00:12.0) NSID 2 from core 0: 7851.63 30.67 2037.29 731.32 5822.81 00:07:51.610 PCIE (0000:00:12.0) NSID 3 from core 0: 7851.63 30.67 2037.25 733.88 5891.78 00:07:51.610 ======================================================== 00:07:51.610 Total : 47109.79 184.02 2037.15 706.33 7404.42 00:07:51.610 00:07:51.610 Initializing NVMe Controllers 00:07:51.610 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.610 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.610 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.610 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.610 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:51.610 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:51.610 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:51.610 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:51.610 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:51.610 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:51.610 Initialization complete. Launching workers. 00:07:51.610 ======================================================== 00:07:51.610 Latency(us) 00:07:51.610 Device Information : IOPS MiB/s Average min max 00:07:51.610 PCIE (0000:00:10.0) NSID 1 from core 1: 7887.70 30.81 2026.99 692.83 6419.33 00:07:51.610 PCIE (0000:00:11.0) NSID 1 from core 1: 7887.70 30.81 2027.96 703.11 6744.67 00:07:51.610 PCIE (0000:00:13.0) NSID 1 from core 1: 7887.70 30.81 2027.93 725.34 7521.52 00:07:51.610 PCIE (0000:00:12.0) NSID 1 from core 1: 7887.70 30.81 2027.85 725.07 6003.40 00:07:51.610 PCIE (0000:00:12.0) NSID 2 from core 1: 7887.70 30.81 2027.78 641.53 6295.78 00:07:51.610 PCIE (0000:00:12.0) NSID 3 from core 1: 7887.70 30.81 2027.75 649.15 6159.36 00:07:51.610 ======================================================== 00:07:51.610 Total : 47326.20 184.87 2027.71 641.53 7521.52 00:07:51.610 00:07:53.510 Initializing NVMe Controllers 00:07:53.510 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:53.510 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:53.510 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:53.510 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:53.510 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:53.510 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:53.510 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:53.510 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:53.510 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:53.510 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:53.510 Initialization complete. Launching workers. 
00:07:53.510 ======================================================== 00:07:53.510 Latency(us) 00:07:53.510 Device Information : IOPS MiB/s Average min max 00:07:53.510 PCIE (0000:00:10.0) NSID 1 from core 2: 4575.57 17.87 3494.58 755.66 16260.16 00:07:53.510 PCIE (0000:00:11.0) NSID 1 from core 2: 4575.57 17.87 3496.42 718.08 14768.18 00:07:53.510 PCIE (0000:00:13.0) NSID 1 from core 2: 4575.57 17.87 3496.02 770.74 17632.78 00:07:53.510 PCIE (0000:00:12.0) NSID 1 from core 2: 4575.57 17.87 3496.14 768.72 14267.31 00:07:53.510 PCIE (0000:00:12.0) NSID 2 from core 2: 4575.57 17.87 3495.92 693.34 13154.14 00:07:53.510 PCIE (0000:00:12.0) NSID 3 from core 2: 4575.57 17.87 3496.20 633.39 16170.61 00:07:53.510 ======================================================== 00:07:53.510 Total : 27453.45 107.24 3495.88 633.39 17632.78 00:07:53.510 00:07:53.510 06:06:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63949 00:07:53.510 ************************************ 00:07:53.510 END TEST nvme_multi_secondary 00:07:53.510 ************************************ 00:07:53.510 06:06:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63950 00:07:53.510 00:07:53.510 real 0m10.503s 00:07:53.510 user 0m18.396s 00:07:53.510 sys 0m0.617s 00:07:53.510 06:06:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.510 06:06:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:07:53.510 06:06:12 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:07:53.510 06:06:12 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:07:53.510 06:06:12 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62900 ]] 00:07:53.510 06:06:12 nvme -- common/autotest_common.sh@1092 -- # kill 62900 00:07:53.510 06:06:12 nvme -- common/autotest_common.sh@1093 -- # wait 62900 00:07:53.510 [2024-11-20 06:06:12.855762] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.855843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.855875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.855894] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.858434] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.858514] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.858536] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.858554] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.861254] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 
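The burst of "The owning process (pid 63822) is not found. Dropping the request." messages here is expected teardown noise rather than a failure: kill_stub is shutting down the harness's long-lived stub application while admin requests queued on behalf of the already-exited aer test process (pid 63822) are still pending, so the driver drops them. The teardown itself, reconstructed from the traces around this point:

    # kill_stub, as traced: stop the stub app that was holding the
    # controllers, then clear its marker file.
    [[ -e /proc/62900 ]] && kill 62900
    wait 62900
    rm -f /var/run/spdk_stub0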
00:07:53.510 [2024-11-20 06:06:12.861311] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.861331] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.861350] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.863879] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.510 [2024-11-20 06:06:12.864217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.511 [2024-11-20 06:06:12.864240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.511 [2024-11-20 06:06:12.864258] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63822) is not found. Dropping the request. 00:07:53.511 06:06:12 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:07:53.511 06:06:12 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:07:53.511 06:06:12 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:07:53.511 06:06:12 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:53.511 06:06:12 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.511 06:06:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.511 ************************************ 00:07:53.511 START TEST bdev_nvme_reset_stuck_adm_cmd 00:07:53.511 ************************************ 00:07:53.511 06:06:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:07:53.511 * Looking for test storage... 
00:07:53.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:53.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.511 --rc genhtml_branch_coverage=1 00:07:53.511 --rc genhtml_function_coverage=1 00:07:53.511 --rc genhtml_legend=1 00:07:53.511 --rc geninfo_all_blocks=1 00:07:53.511 --rc geninfo_unexecuted_blocks=1 00:07:53.511 00:07:53.511 ' 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:53.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.511 --rc genhtml_branch_coverage=1 00:07:53.511 --rc genhtml_function_coverage=1 00:07:53.511 --rc genhtml_legend=1 00:07:53.511 --rc geninfo_all_blocks=1 00:07:53.511 --rc geninfo_unexecuted_blocks=1 00:07:53.511 00:07:53.511 ' 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:53.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.511 --rc genhtml_branch_coverage=1 00:07:53.511 --rc genhtml_function_coverage=1 00:07:53.511 --rc genhtml_legend=1 00:07:53.511 --rc geninfo_all_blocks=1 00:07:53.511 --rc geninfo_unexecuted_blocks=1 00:07:53.511 00:07:53.511 ' 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:53.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.511 --rc genhtml_branch_coverage=1 00:07:53.511 --rc genhtml_function_coverage=1 00:07:53.511 --rc genhtml_legend=1 00:07:53.511 --rc geninfo_all_blocks=1 00:07:53.511 --rc geninfo_unexecuted_blocks=1 00:07:53.511 00:07:53.511 ' 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:07:53.511 
06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:53.511 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64106 00:07:53.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64106 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 64106 ']' 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.770 06:06:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:53.770 [2024-11-20 06:06:13.274743] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:07:53.770 [2024-11-20 06:06:13.275062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64106 ] 00:07:54.027 [2024-11-20 06:06:13.455814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.027 [2024-11-20 06:06:13.565096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.027 [2024-11-20 06:06:13.565165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.027 [2024-11-20 06:06:13.565277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.027 [2024-11-20 06:06:13.565304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.594 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:54.594 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:07:54.594 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:07:54.594 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.594 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 nvme0n1 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_UNzLT.txt 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 true 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732082774 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64129 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:07:54.852 06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:54.852 
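The rpc_cmd traces above arm the stuck-admin-command scenario this test is named for: a one-shot error injection on controller nvme0 intercepts admin opcode 10 (Get Features, visible later as GET FEATURES NUMBER OF QUEUES) and holds it unsubmitted for up to 15 s with status sct=0/sc=1, after which bdev_nvme_send_cmd issues exactly that command in the background. The controller reset that follows below must then complete the stuck command manually within the 5 s test_timeout. The RPC sequence in isolation:

    # Arm a one-shot "stuck admin command" injection, then reset while
    # the command is pending (RPCs copied from the traces).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # ... bdev_nvme_send_cmd fires the Get Features admin command here ...
    "$rpc" bdev_nvme_reset_controller nvme0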
06:06:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:56.815 [2024-11-20 06:06:16.265756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:56.815 [2024-11-20 06:06:16.266174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:07:56.815 [2024-11-20 06:06:16.266207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:07:56.815 [2024-11-20 06:06:16.266221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.815 [2024-11-20 06:06:16.267921] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:56.815 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64129 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64129 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64129 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_UNzLT.txt 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_UNzLT.txt 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64106 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 64106 ']' 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 64106 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64106 00:07:56.815 killing process with pid 64106 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64106' 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 64106 00:07:56.815 06:06:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 64106 00:07:58.712 06:06:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:07:58.712 06:06:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:07:58.712 ************************************ 00:07:58.712 END TEST bdev_nvme_reset_stuck_adm_cmd 00:07:58.712 ************************************ 00:07:58.712 00:07:58.712 real 0m5.052s 
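base64_decode_bits, traced twice above, is how the test digs the status code (sc) and status code type (sct) out of the base64-encoded completion that bdev_nvme_send_cmd wrote to the temp file: it expands the blob into one hex byte per array element and then masks out the requested bit range. Only the byte-dump stage appears verbatim in the trace; the sketch below reproduces it and summarizes the bit arithmetic in the comment:

    # Decode the captured completion the way base64_decode_bits does.
    # The 16-byte CQE ends in 0x02 0x00, a little-endian status/phase
    # word of 0x0002: sc = bits 8:1 = 0x1 and sct = bits 11:9 = 0x0,
    # matching the injected --sct 0 --sc 1.
    spdk_nvme_status='AAAAAAAAAAAAAAAAAAACAA=='
    bin_array=($(base64 -d <(printf '%s' "$spdk_nvme_status") | hexdump -ve '/1 "0x%02x\n"'))
    printf '%s\n' "${bin_array[@]}"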
00:07:58.712 user 0m17.860s 00:07:58.712 sys 0m0.479s 00:07:58.712 06:06:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.712 06:06:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:58.712 06:06:18 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:07:58.712 06:06:18 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:07:58.712 06:06:18 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.712 06:06:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.712 06:06:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.712 ************************************ 00:07:58.712 START TEST nvme_fio 00:07:58.712 ************************************ 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:58.712 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:58.712 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:07:59.074 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:59.074 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:07:59.074 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:07:59.075 06:06:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:07:59.075 06:06:18 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:07:59.075 06:06:18 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:07:59.333 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:07:59.333 fio-3.35 00:07:59.333 Starting 1 thread 00:08:07.451 00:08:07.451 test: (groupid=0, jobs=1): err= 0: pid=64270: Wed Nov 20 06:06:26 2024 00:08:07.451 read: IOPS=13.0k, BW=50.9MiB/s (53.3MB/s)(102MiB/2001msec) 00:08:07.451 slat (nsec): min=3449, max=91965, avg=5939.31, stdev=3210.66 00:08:07.451 clat (usec): min=637, max=633693, avg=4144.42, stdev=22955.44 00:08:07.451 lat (usec): min=646, max=633701, avg=4150.36, stdev=22955.51 00:08:07.451 clat percentiles (usec): 00:08:07.451 | 1.00th=[ 1778], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2409], 00:08:07.451 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2835], 60.00th=[ 3097], 00:08:07.451 | 70.00th=[ 3490], 80.00th=[ 4228], 90.00th=[ 5145], 95.00th=[ 5866], 00:08:07.451 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[624952], 99.95th=[624952], 00:08:07.451 | 99.99th=[633340] 00:08:07.451 bw ( KiB/s): min=63712, max=77152, per=100.00%, avg=70432.00, stdev=9503.52, samples=2 00:08:07.451 iops : min=15928, max=19288, avg=17608.00, stdev=2375.88, samples=2 00:08:07.451 write: IOPS=13.0k, BW=50.8MiB/s (53.3MB/s)(102MiB/2001msec); 0 zone resets 00:08:07.451 slat (nsec): min=3507, max=73982, avg=6115.69, stdev=3126.28 00:08:07.451 clat (usec): min=450, max=643099, avg=5660.79, stdev=37807.76 00:08:07.451 lat (usec): min=459, max=643107, avg=5666.91, stdev=37807.86 00:08:07.451 clat percentiles (usec): 00:08:07.451 | 1.00th=[ 1811], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:07.451 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2868], 60.00th=[ 3130], 00:08:07.451 | 70.00th=[ 3556], 80.00th=[ 4293], 90.00th=[ 5342], 95.00th=[ 6194], 00:08:07.451 | 99.00th=[ 9110], 99.50th=[ 16057], 99.90th=[641729], 99.95th=[641729], 00:08:07.451 | 99.99th=[641729] 00:08:07.451 bw ( KiB/s): min=64464, max=76504, per=100.00%, avg=70484.00, stdev=8513.57, samples=2 00:08:07.451 iops : min=16116, max=19126, avg=17621.00, stdev=2128.39, samples=2 00:08:07.451 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:08:07.451 lat (msec) : 2=1.81%, 4=74.90%, 10=22.75%, 20=0.25%, 750=0.25% 00:08:07.451 cpu : 
usr=99.00%, sys=0.10%, ctx=4, majf=0, minf=607 00:08:07.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:07.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:07.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:07.451 issued rwts: total=26051,26035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:07.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:07.451 00:08:07.451 Run status group 0 (all jobs): 00:08:07.451 READ: bw=50.9MiB/s (53.3MB/s), 50.9MiB/s-50.9MiB/s (53.3MB/s-53.3MB/s), io=102MiB (107MB), run=2001-2001msec 00:08:07.451 WRITE: bw=50.8MiB/s (53.3MB/s), 50.8MiB/s-50.8MiB/s (53.3MB/s-53.3MB/s), io=102MiB (107MB), run=2001-2001msec 00:08:07.451 ----------------------------------------------------- 00:08:07.451 Suppressions used: 00:08:07.451 count bytes template 00:08:07.451 1 32 /usr/src/fio/parse.c 00:08:07.451 1 8 libtcmalloc_minimal.so 00:08:07.451 ----------------------------------------------------- 00:08:07.451 00:08:07.451 06:06:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:07.451 06:06:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:07.451 06:06:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:07.451 06:06:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:07.451 06:06:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:07.451 06:06:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:07.451 06:06:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:07.451 06:06:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:07.451 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:07.452 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:07.452 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:07.452 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:07.452 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:07.452 
06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:07.452 06:06:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:07.711 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:07.711 fio-3.35 00:08:07.711 Starting 1 thread 00:08:17.702 00:08:17.702 test: (groupid=0, jobs=1): err= 0: pid=64325: Wed Nov 20 06:06:36 2024 00:08:17.702 read: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2001msec) 00:08:17.702 slat (nsec): min=4299, max=76767, avg=6176.59, stdev=3455.61 00:08:17.702 clat (usec): min=324, max=11197, avg=3648.17, stdev=1283.94 00:08:17.702 lat (usec): min=330, max=11230, avg=3654.35, stdev=1285.50 00:08:17.702 clat percentiles (usec): 00:08:17.702 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:08:17.702 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3130], 60.00th=[ 3490], 00:08:17.702 | 70.00th=[ 4015], 80.00th=[ 4752], 90.00th=[ 5604], 95.00th=[ 6325], 00:08:17.702 | 99.00th=[ 7439], 99.50th=[ 7898], 99.90th=[ 9110], 99.95th=[ 9896], 00:08:17.702 | 99.99th=[10814] 00:08:17.703 bw ( KiB/s): min=66976, max=72144, per=99.47%, avg=69301.33, stdev=2622.55, samples=3 00:08:17.703 iops : min=16744, max=18036, avg=17325.33, stdev=655.64, samples=3 00:08:17.703 write: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(136MiB/2001msec); 0 zone resets 00:08:17.703 slat (usec): min=4, max=119, avg= 6.32, stdev= 3.50 00:08:17.703 clat (usec): min=396, max=10873, avg=3669.27, stdev=1294.61 00:08:17.703 lat (usec): min=403, max=10887, avg=3675.59, stdev=1296.18 00:08:17.703 clat percentiles (usec): 00:08:17.703 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2671], 00:08:17.703 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3130], 60.00th=[ 3490], 00:08:17.703 | 70.00th=[ 4015], 80.00th=[ 4752], 90.00th=[ 5669], 95.00th=[ 6390], 00:08:17.703 | 99.00th=[ 7504], 99.50th=[ 7963], 99.90th=[ 9241], 99.95th=[ 9896], 00:08:17.703 | 99.99th=[10683] 00:08:17.703 bw ( KiB/s): min=67176, max=71960, per=99.23%, avg=69218.67, stdev=2467.34, samples=3 00:08:17.703 iops : min=16794, max=17990, avg=17304.67, stdev=616.83, samples=3 00:08:17.703 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:17.703 lat (msec) : 2=0.68%, 4=69.20%, 10=30.05%, 20=0.04% 00:08:17.703 cpu : usr=98.65%, sys=0.20%, ctx=5, majf=0, minf=607 00:08:17.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:17.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:17.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:17.703 issued rwts: total=34853,34895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:17.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:17.703 00:08:17.703 Run status group 0 (all jobs): 00:08:17.703 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2001-2001msec 00:08:17.703 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=136MiB (143MB), run=2001-2001msec 00:08:17.703 ----------------------------------------------------- 00:08:17.703 Suppressions used: 00:08:17.703 count bytes template 00:08:17.703 1 32 /usr/src/fio/parse.c 00:08:17.703 1 8 libtcmalloc_minimal.so 00:08:17.703 ----------------------------------------------------- 
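Each controller gets the same probe before fio runs against it (nvme.sh@34-43 in the trace): confirm it exposes an active namespace, check whether any namespace uses an extended-LBA (metadata-in-data) format, and pick the fio block size accordingly. A sketch condensed from this log; none of the four controllers here match 'Extended Data LBA', so the alternate block-size branch is never exercised and is omitted:

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
for bdf in "${bdfs[@]}"; do
    # only test controllers that report at least one namespace
    "$identify" -r "trtype:PCIe traddr:$bdf" |
        grep -qE '^Namespace ID:[0-9]+' || continue  # skip path assumed; all match here
    if ! "$identify" -r "trtype:PCIe traddr:$bdf" | grep -q 'Extended Data LBA'; then
        bs=4096  # plain 4 KiB LBAs, the only case this run hits
    fi
    # fio's --filename splits on ':', so the BDF's colons become dots
    fio_nvme "$config" "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
done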
00:08:17.703 00:08:17.703 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:17.703 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:17.703 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:17.703 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:17.703 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:17.703 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:17.972 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:17.972 06:06:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:17.972 06:06:37 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:18.231 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:18.231 fio-3.35 00:08:18.231 Starting 1 thread 00:08:26.391 00:08:26.391 test: (groupid=0, jobs=1): err= 0: pid=64392: Wed Nov 20 06:06:44 2024 00:08:26.391 read: IOPS=19.2k, BW=75.0MiB/s (78.7MB/s)(150MiB/2001msec) 00:08:26.391 slat (nsec): min=3390, max=74434, avg=5796.47, stdev=3277.90 00:08:26.391 clat (usec): min=316, max=8913, avg=3316.10, stdev=1260.48 00:08:26.391 lat (usec): min=321, max=8929, avg=3321.90, stdev=1262.16 00:08:26.391 clat percentiles (usec): 00:08:26.391 | 1.00th=[ 1958], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2409], 
00:08:26.391 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2769], 60.00th=[ 3032], 00:08:26.391 | 70.00th=[ 3458], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 6128], 00:08:26.391 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8291], 99.95th=[ 8455], 00:08:26.391 | 99.99th=[ 8586] 00:08:26.391 bw ( KiB/s): min=73160, max=78064, per=98.00%, avg=75304.00, stdev=2509.36, samples=3 00:08:26.391 iops : min=18290, max=19516, avg=18826.00, stdev=627.34, samples=3 00:08:26.391 write: IOPS=19.2k, BW=75.0MiB/s (78.6MB/s)(150MiB/2001msec); 0 zone resets 00:08:26.391 slat (nsec): min=3448, max=79243, avg=6006.12, stdev=3267.55 00:08:26.391 clat (usec): min=305, max=9008, avg=3329.96, stdev=1263.84 00:08:26.391 lat (usec): min=311, max=9022, avg=3335.97, stdev=1265.50 00:08:26.391 clat percentiles (usec): 00:08:26.391 | 1.00th=[ 1975], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:26.391 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2802], 60.00th=[ 3064], 00:08:26.391 | 70.00th=[ 3490], 80.00th=[ 4293], 90.00th=[ 5342], 95.00th=[ 6063], 00:08:26.391 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8455], 00:08:26.391 | 99.99th=[ 8717] 00:08:26.391 bw ( KiB/s): min=73392, max=77888, per=98.16%, avg=75354.67, stdev=2301.68, samples=3 00:08:26.391 iops : min=18348, max=19472, avg=18838.67, stdev=575.42, samples=3 00:08:26.391 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:26.391 lat (msec) : 2=1.12%, 4=75.87%, 10=22.99% 00:08:26.391 cpu : usr=98.85%, sys=0.10%, ctx=7, majf=0, minf=608 00:08:26.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:26.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:26.391 issued rwts: total=38441,38402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:26.391 00:08:26.391 Run status group 0 (all jobs): 00:08:26.391 READ: bw=75.0MiB/s (78.7MB/s), 75.0MiB/s-75.0MiB/s (78.7MB/s-78.7MB/s), io=150MiB (157MB), run=2001-2001msec 00:08:26.391 WRITE: bw=75.0MiB/s (78.6MB/s), 75.0MiB/s-75.0MiB/s (78.6MB/s-78.6MB/s), io=150MiB (157MB), run=2001-2001msec 00:08:26.391 ----------------------------------------------------- 00:08:26.391 Suppressions used: 00:08:26.391 count bytes template 00:08:26.391 1 32 /usr/src/fio/parse.c 00:08:26.391 1 8 libtcmalloc_minimal.so 00:08:26.391 ----------------------------------------------------- 00:08:26.391 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:26.391 06:06:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:26.391 06:06:45 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:26.391 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:26.391 fio-3.35 00:08:26.391 Starting 1 thread 00:08:38.634 00:08:38.634 test: (groupid=0, jobs=1): err= 0: pid=64458: Wed Nov 20 06:06:57 2024 00:08:38.634 read: IOPS=15.2k, BW=59.2MiB/s (62.1MB/s)(118MiB/2001msec) 00:08:38.634 slat (usec): min=4, max=476, avg= 7.34, stdev= 4.81 00:08:38.634 clat (usec): min=349, max=10728, avg=4194.77, stdev=1353.22 00:08:38.634 lat (usec): min=354, max=10734, avg=4202.11, stdev=1354.71 00:08:38.634 clat percentiles (usec): 00:08:38.634 | 1.00th=[ 2311], 5.00th=[ 2737], 10.00th=[ 2900], 20.00th=[ 3097], 00:08:38.634 | 30.00th=[ 3294], 40.00th=[ 3523], 50.00th=[ 3785], 60.00th=[ 4113], 00:08:38.634 | 70.00th=[ 4555], 80.00th=[ 5276], 90.00th=[ 6259], 95.00th=[ 6915], 00:08:38.634 | 99.00th=[ 8225], 99.50th=[ 8848], 99.90th=[ 9896], 99.95th=[10159], 00:08:38.634 | 99.99th=[10290] 00:08:38.634 bw ( KiB/s): min=53864, max=62248, per=97.24%, avg=58936.00, stdev=4460.50, samples=3 00:08:38.634 iops : min=13466, max=15562, avg=14734.00, stdev=1115.13, samples=3 00:08:38.634 write: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(119MiB/2001msec); 0 zone resets 00:08:38.634 slat (nsec): min=4979, max=93979, avg=7587.08, stdev=4176.12 00:08:38.634 clat (usec): min=209, max=10319, avg=4210.70, stdev=1329.79 00:08:38.634 lat (usec): min=215, max=10337, avg=4218.28, stdev=1331.30 00:08:38.634 clat percentiles (usec): 00:08:38.634 | 1.00th=[ 2343], 5.00th=[ 2769], 10.00th=[ 2933], 20.00th=[ 3130], 00:08:38.634 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 3818], 60.00th=[ 4146], 00:08:38.634 | 70.00th=[ 4621], 80.00th=[ 
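For reference, the bdfs array these four runs iterate over was built once by get_nvme_bdfs() (autotest_common.sh@1496-1502, traced at the start of nvme_fio). Condensed sketch; the behavior of the empty-array guard beyond the (( 4 == 0 )) check seen in this trace is assumed:

get_nvme_bdfs() {
    local rootdir=/home/vagrant/spdk_repo/spdk
    local -a bdfs
    # gen_nvme.sh emits a JSON bdev_nvme_attach_controller config for every
    # NVMe device it finds; jq pulls the PCI address (traddr) out of each entry
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1  # assumed failure path; 4 devices in this run
    printf '%s\n' "${bdfs[@]}"
}
# this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0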
5276], 90.00th=[ 6259], 95.00th=[ 6849], 00:08:38.634 | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9765], 00:08:38.634 | 99.99th=[10159] 00:08:38.634 bw ( KiB/s): min=54096, max=61872, per=96.76%, avg=58746.67, stdev=4106.28, samples=3 00:08:38.634 iops : min=13524, max=15468, avg=14686.67, stdev=1026.57, samples=3 00:08:38.634 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:08:38.634 lat (msec) : 2=0.46%, 4=55.98%, 10=43.46%, 20=0.05% 00:08:38.634 cpu : usr=98.10%, sys=0.30%, ctx=2, majf=0, minf=605 00:08:38.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.634 issued rwts: total=30318,30372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:38.634 00:08:38.634 Run status group 0 (all jobs): 00:08:38.634 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=118MiB (124MB), run=2001-2001msec 00:08:38.634 WRITE: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=119MiB (124MB), run=2001-2001msec 00:08:38.634 ----------------------------------------------------- 00:08:38.634 Suppressions used: 00:08:38.634 count bytes template 00:08:38.634 1 32 /usr/src/fio/parse.c 00:08:38.634 1 8 libtcmalloc_minimal.so 00:08:38.634 ----------------------------------------------------- 00:08:38.634 00:08:38.634 06:06:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:38.634 06:06:58 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:38.634 00:08:38.634 real 0m40.028s 00:08:38.634 user 0m17.372s 00:08:38.634 sys 0m44.389s 00:08:38.634 06:06:58 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.634 ************************************ 00:08:38.634 06:06:58 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:38.634 END TEST nvme_fio 00:08:38.634 ************************************ 00:08:38.634 ************************************ 00:08:38.634 END TEST nvme 00:08:38.634 ************************************ 00:08:38.634 00:08:38.634 real 1m49.904s 00:08:38.634 user 3m40.211s 00:08:38.634 sys 0m54.758s 00:08:38.634 06:06:58 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.634 06:06:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.634 06:06:58 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:38.634 06:06:58 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:38.634 06:06:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.634 06:06:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.634 06:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:38.634 ************************************ 00:08:38.634 START TEST nvme_scc 00:08:38.634 ************************************ 00:08:38.634 06:06:58 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:38.927 * Looking for test storage... 
00:08:38.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.927 06:06:58 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:38.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.927 --rc genhtml_branch_coverage=1 00:08:38.927 --rc genhtml_function_coverage=1 00:08:38.927 --rc genhtml_legend=1 00:08:38.927 --rc geninfo_all_blocks=1 00:08:38.927 --rc geninfo_unexecuted_blocks=1 00:08:38.927 00:08:38.927 ' 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:38.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.927 --rc genhtml_branch_coverage=1 00:08:38.927 --rc genhtml_function_coverage=1 00:08:38.927 --rc genhtml_legend=1 00:08:38.927 --rc geninfo_all_blocks=1 00:08:38.927 --rc geninfo_unexecuted_blocks=1 00:08:38.927 00:08:38.927 ' 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:38.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.927 --rc genhtml_branch_coverage=1 00:08:38.927 --rc genhtml_function_coverage=1 00:08:38.927 --rc genhtml_legend=1 00:08:38.927 --rc geninfo_all_blocks=1 00:08:38.927 --rc geninfo_unexecuted_blocks=1 00:08:38.927 00:08:38.927 ' 00:08:38.927 06:06:58 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:38.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.928 --rc genhtml_branch_coverage=1 00:08:38.928 --rc genhtml_function_coverage=1 00:08:38.928 --rc genhtml_legend=1 00:08:38.928 --rc geninfo_all_blocks=1 00:08:38.928 --rc geninfo_unexecuted_blocks=1 00:08:38.928 00:08:38.928 ' 00:08:38.928 06:06:58 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.928 06:06:58 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.928 06:06:58 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.928 06:06:58 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.928 06:06:58 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.928 06:06:58 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.928 06:06:58 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.928 06:06:58 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.928 06:06:58 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:38.928 06:06:58 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
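The lcov version gate above rides on scripts/common.sh's small semantic-version comparator: lt 1.15 2 becomes cmp_versions 1.15 '<' 2, which splits both versions on '.', '-' and ':' and compares component by component. A behaviorally equivalent condensation, not the verbatim helper; only the '<' and '>' paths exercised by this trace are kept (the real helper also supports ge/le/eq and routes components through decimal()):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 ver1_l ver2_l v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # a missing or non-numeric component compares as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    return 1  # equal component-for-component: neither strictly < nor >
}
# lt 1.15 2 -> ver1=(1 15) ver2=(2); 1 < 2 at v=0, so it returns 0 (true)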
00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:38.928 06:06:58 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:38.928 06:06:58 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.928 06:06:58 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:38.928 06:06:58 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:38.928 06:06:58 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:38.928 06:06:58 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:39.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.448 Waiting for block devices as requested 00:08:39.448 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.448 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.706 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.994 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:44.994 06:07:04 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:44.994 06:07:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:44.994 06:07:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:44.994 06:07:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:44.994 06:07:04 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:44.994 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:44.995 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:44.996 06:07:04 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.996 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:44.997 06:07:04 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.997 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
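[Editor's note] For readers decoding the trace: the repeated IFS=: / read / eval pattern above is the identify parser at work. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 xtrace lines (names follow the trace; the whitespace trimming is approximated, so treat this as a paraphrase rather than the verbatim SPDK helper):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"    # e.g. declares the global nvme0n1=() map seen above
        # nvme-cli prints one "field : value" line per identify field; split on ':'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue              # skip banner lines with no value part
            eval "${ref}[${reg// /}]=\"${val# }\"" # store, e.g. nvme0n1[nsze]=0x140000
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Called as "nvme_get nvme0n1 id-ns /dev/nvme0n1" (the invocation visible at functions.sh@57 above), after which fields read back as ${nvme0n1[nsze]} and friends.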
00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:44.998 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
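[Editor's note] The raw fields are enough to derive the usual quantities. As a worked example with the values just parsed: nvme0n1 reports nsze=0x140000 (1,310,720 blocks) and flbas=0x4, and LBA format 4 (listed a few entries below) carries lbads:12, i.e. 4096-byte blocks, giving 1,310,720 * 4096 = 5,368,709,120 bytes, exactly 5 GiB. A hypothetical helper along those lines (ns_bytes is my name, not one from the log's scripts; it assumes flbas bits 0-3 select the in-use format):

    ns_bytes() {
        local -n ns=$1                           # nameref to a parsed id-ns map
        local fmt=$(( ns[flbas] & 0xf ))         # low nibble = active LBA format index
        local lbads
        lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ns[lbaf$fmt]}")
        echo $(( ns[nsze] * (1 << lbads) ))      # blocks * block size in bytes
    }
    ns_bytes nvme0n1   # -> 5368709120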
00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:44.999 06:07:04 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:44.999 06:07:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:44.999 06:07:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:44.999 06:07:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:44.999 06:07:04 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.999 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:45.000 06:07:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:45.000 
06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
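[Editor's note] Zooming out, the transition visible above at functions.sh@47-52 (and the registration at @58-63 just before it) is the outer discovery loop finishing nvme0 and moving on to nvme1. A sketch of that loop, reconstructed from the xtrace: the variable and array names come from the trace, while details such as how $pci is derived from sysfs, the wrapper function name, and the pre-declaration of the global arrays are my assumptions:

    scan_nvme_ctrls() {    # hypothetical name; "local -n" in the trace implies function scope
        local ctrl ns pci ctrl_dev ns_dev
        # ctrls, nvmes, bdfs (assoc) and ordered_ctrls (indexed) assumed declared globally
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed source of 0000:00:10.0
            pci_can_use "$pci" || continue   # scripts/common.sh allow/block-list filter
            ctrl_dev=${ctrl##*/}
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns_dev##*n}]=$ns_dev          # e.g. nvme0_ns[1]=nvme0n1
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # numeric index keeps a stable order
        done
    }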
00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.000 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:45.001 06:07:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:45.001 06:07:04 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.001 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:45.002 06:07:04 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
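[Editor's note] With id-ctrl parsed, the capability checks the nvme_scc suite needs become simple bit tests on the stored values. Per my reading of the NVMe base specification, the Copy command is advertised in ONCS bit 8; both QEMU controllers above report oncs=0x15d, which has that bit set. A hypothetical check (function name mine, not from the scripts):

    ctrl_supports_copy() {
        local -n c=$1
        (( c[oncs] & 0x100 ))    # ONCS bit 8: Copy (Simple Copy) command support
    }
    ctrl_supports_copy nvme1 && echo "nvme1 advertises Simple Copy"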
00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.002 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # 
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:08:45.003 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:45.004 06:07:04 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:45.004 06:07:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:08:45.004 06:07:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:45.004 06:07:04 nvme_scc -- scripts/common.sh@27 -- # return 0
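Before the trace repeats the identical walk for nvme2, this is a minimal sketch of the loop the functions.sh@16-23 lines above are stepping through. It is a reconstruction from the trace, not the verbatim SPDK helper: the whitespace trimming and the NVME variable are assumptions, while the local -gA declaration, the IFS=: read loop, the [[ -n ]] guard, and the eval'd assignment match the traced lines.

NVME=/usr/local/src/nvme-cli/nvme      # binary this run traces at functions.sh@16

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                # @20: global assoc array, e.g. nvme1n1=()
    while IFS=: read -r reg val; do    # @21: split each "field : value" line
        reg=${reg// /}                 # (assumed) drop the padding around the colon
        val=${val# }
        [[ -n $val ]] || continue      # @22: skip banner and blank lines
        eval "${ref}[$reg]=\"\$val\""  # @23: e.g. nvme1n1[nsze]="0x17a17a"
    done < <("$NVME" "$@")             # @16: e.g. nvme id-ns /dev/nvme1n1
}

nvme_get nvme1 id-ctrl /dev/nvme1
echo "${nvme1[subnqn]}"                # -> nqn.2019-08.org.qemu:12340

Because read hands everything after the first colon to val, values that themselves contain colons (subnqn, the ps0 power-state string) survive the split intact, which is why they show up whole in the eval lines above.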
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:08:45.004 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:08:45.005 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:08:45.006 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:45.007 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
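The @47-63 lines bracketing each controller amount to the discovery loop sketched below. This is a hedged reconstruction that reuses nvme_get from the previous sketch; resolving the PCI address with readlink is an assumption standing in for the pci= and pci_can_use steps traced at @49-50.

scan_nvme_ctrls() {
    local ctrl pci ctrl_dev ns ns_dev
    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do                # @47
        [[ -e $ctrl ]] || continue                       # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")  # @49 (assumed), e.g. 0000:00:12.0
        ctrl_dev=${ctrl##*/}                             # @51, e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52

        local -n _ctrl_ns=${ctrl_dev}_ns                 # @53: nameref onto e.g. nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do              # @54
            [[ -e $ns ]] || continue                     # @55
            ns_dev=${ns##*/}                             # @56, e.g. nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57
            _ctrl_ns[${ns_dev##*n}]=$ns_dev              # @58: nvme2_ns[1]=nvme2n1
        done

        ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
        bdfs["$ctrl_dev"]=$pci                           # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
    done
}

The nameref at @53 is what lets one loop body fill a differently named per-controller array (nvme1_ns, nvme2_ns, ...) on each pass, while ctrls/nvmes/bdfs keep the string keys that later test stages use to find a device by name or PCI address.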
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:08:45.008 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:45.009 06:07:04 nvme_scc -- 
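The nvme_get helper being traced here drives all of the assignments above: it runs nvme-cli's id-ns and reads each "field : value" line of the output into a Bash associative array named by its first argument. A minimal standalone sketch of that parsing pattern (illustrative only, not SPDK's actual nvme/functions.sh; the device path and trimming details are assumptions):

  #!/usr/bin/env bash
  # Parse nvme-cli "field : value" output into an associative array,
  # mirroring the nvme_get loop traced above. Requires nvme-cli.
  declare -A ns_info
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue            # skip banner lines with no value
      reg=${reg//[[:space:]]/}             # "lbaf  4" -> "lbaf4"
      val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace
      ns_info[$reg]=$val                   # e.g. ns_info[nsze]=0x100000
  done < <(nvme id-ns /dev/nvme2n1)
  echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"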
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:45.009 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:45.010 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
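The numbers repeated for each namespace above pin down the geometry: flbas=0x4 selects LBA format 4, the one marked "(in use)", whose lbads of 12 means 2^12 = 4096-byte logical blocks, so nsze=0x100000 blocks works out to 4 GiB per namespace. A quick check with shell arithmetic (the values are taken from the trace; the calculation itself is just a sketch):

  # nsze and lbads from the trace: 0x100000 blocks of 2^12 bytes each.
  nsze=0x100000
  lbads=12
  echo $(( nsze * (1 << lbads) ))               # 4294967296 bytes
  echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB" # 4 GiB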
00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:45.011 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:45.012 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
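Each of these assignments goes through eval because the target array's name (nvme2n1, nvme2n2, ...) arrives in nvme_get as a parameter, so one function can populate whichever global associative array the caller names. A small sketch of that by-name assignment (function and array names here are illustrative, not SPDK's):

  # Write into a global associative array whose name is passed in ($1).
  fill_field() {
      local ref=$1 reg=$2 val=$3
      eval "${ref}[${reg}]=\"\$val\""   # expands to e.g.: demo[nsze]="$val"
  }
  declare -gA demo=()
  fill_field demo nsze 0x100000
  echo "${demo[nsze]}"                  # prints 0x100000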
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:45.013 06:07:04 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:45.013 06:07:04 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:45.013 06:07:04 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:45.013 06:07:04 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:45.013 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
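With nvme2 fully recorded above (three namespaces, PCI address 0000:00:12.0), the outer loop moves on to nvme3 at 0000:00:13.0; pci_can_use returns 0 because no allow/block list is set, which is why the traced regex test has an empty left-hand side. The enumeration itself is a plain sysfs walk; a sketch that mirrors the traced globs (the readlink step is an assumption that holds for typical PCIe-attached controllers):

  # Enumerate controllers and their namespaces the way the traced loop does.
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0
      for ns in "$ctrl/${ctrl##*/}n"*; do              # nvmeXnY entries
          [[ -e $ns ]] || continue
          echo "ctrl=${ctrl##*/} pci=$pci ns=${ns##*/}"
      done
  done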
# eval 'nvme3[npss]="0"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.014 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.015 
06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:45.015 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
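For reference, the wctemp/cctemp values captured above are the controller's temperature thresholds in Kelvin, as defined by the NVMe Identify Controller data. A minimal conversion sketch; kelvin_to_c is a hypothetical helper, not part of functions.sh, and uses the integer approximation 0 C = 273 K:

  kelvin_to_c() { echo $(( $1 - 273 )); }   # hypothetical helper, integer approximation
  kelvin_to_c 343   # wctemp -> 70  (warning threshold, ~70 C)
  kelvin_to_c 373   # cctemp -> 100 (critical threshold, ~100 C)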
00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:45.278 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
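The sqes/cqes bytes recorded a few entries above encode queue entry sizes as powers of two: the low nibble is the required size, the high nibble the maximum (per the NVMe Identify Controller layout). A quick decoding sketch; decode_es is a hypothetical helper, not from the test scripts:

  decode_es() {
      local v=$(( $1 ))
      echo "required=$(( 1 << (v & 0xf) )), max=$(( 1 << (v >> 4) )) bytes"
  }
  decode_es 0x66   # sqes: 64-byte submission queue entries
  decode_es 0x44   # cqes: 16-byte completion queue entries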
00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:45.279 06:07:04 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:45.279 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:45.280 06:07:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:45.280 
06:07:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:45.280 06:07:04 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:45.280 06:07:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:45.280 06:07:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:45.280 06:07:04 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:45.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.422 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.422 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.422 0000:00:10.0 (1b36 
0010): nvme -> uio_pci_generic 00:08:46.422 06:07:05 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:46.422 06:07:05 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:46.422 06:07:05 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.422 06:07:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:46.422 ************************************ 00:08:46.422 START TEST nvme_simple_copy 00:08:46.422 ************************************ 00:08:46.422 06:07:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:46.681 Initializing NVMe Controllers 00:08:46.681 Attaching to 0000:00:10.0 00:08:46.681 Controller supports SCC. Attached to 0000:00:10.0 00:08:46.681 Namespace ID: 1 size: 6GB 00:08:46.681 Initialization complete. 00:08:46.681 00:08:46.681 Controller QEMU NVMe Ctrl (12340 ) 00:08:46.681 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:46.681 Namespace Block Size:4096 00:08:46.681 Writing LBAs 0 to 63 with Random Data 00:08:46.681 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:46.681 LBAs matching Written Data: 64 00:08:46.681 00:08:46.681 real 0m0.323s 00:08:46.681 user 0m0.140s 00:08:46.681 sys 0m0.081s 00:08:46.681 06:07:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.681 ************************************ 00:08:46.681 END TEST nvme_simple_copy 00:08:46.681 ************************************ 00:08:46.681 06:07:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:46.941 ************************************ 00:08:46.941 END TEST nvme_scc 00:08:46.941 ************************************ 00:08:46.941 00:08:46.941 real 0m8.114s 00:08:46.941 user 0m1.136s 00:08:46.941 sys 0m1.572s 00:08:46.941 06:07:06 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.941 06:07:06 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:46.941 06:07:06 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:46.941 06:07:06 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:46.941 06:07:06 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:46.941 06:07:06 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:46.941 06:07:06 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:46.941 06:07:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.941 06:07:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.941 06:07:06 -- common/autotest_common.sh@10 -- # set +x 00:08:46.941 ************************************ 00:08:46.941 START TEST nvme_fdp 00:08:46.941 ************************************ 00:08:46.941 06:07:06 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:08:46.941 * Looking for test storage... 
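The controller selection traced above boils down to one bit test: ctrl_has_scc() in test/common/nvme/functions.sh reads the cached oncs value and checks ONCS bit 8, which advertises the Simple Copy command. Condensed to its core, with the values taken from the trace and the surrounding loop and array plumbing omitted:

  oncs=0x15d                     # reported by all four controllers here
  if (( oncs & 1 << 8 )); then   # 0x15d & 0x100 == 0x100 -> Copy supported
      echo "controller supports SCC"
  fi

Since every controller passes, the first one in iteration order (nvme1, at 0000:00:10.0) is picked, and nvme_simple_copy then verifies the feature end to end: it writes LBAs 0 to 63 with random data, issues a copy to destination LBA 256, and confirms all 64 LBAs match.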
00:08:46.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:46.941 06:07:06 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:46.941 06:07:06 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:46.941 06:07:06 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:47.201 06:07:06 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:47.201 06:07:06 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.201 06:07:06 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.201 --rc genhtml_branch_coverage=1 00:08:47.201 --rc genhtml_function_coverage=1 00:08:47.201 --rc genhtml_legend=1 00:08:47.201 --rc geninfo_all_blocks=1 00:08:47.201 --rc geninfo_unexecuted_blocks=1 00:08:47.201 00:08:47.201 ' 00:08:47.201 06:07:06 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.201 --rc genhtml_branch_coverage=1 00:08:47.201 --rc genhtml_function_coverage=1 00:08:47.201 --rc genhtml_legend=1 00:08:47.201 --rc geninfo_all_blocks=1 00:08:47.201 --rc geninfo_unexecuted_blocks=1 00:08:47.201 00:08:47.201 ' 00:08:47.201 06:07:06 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.201 --rc genhtml_branch_coverage=1 00:08:47.201 --rc genhtml_function_coverage=1 00:08:47.201 --rc genhtml_legend=1 00:08:47.201 --rc geninfo_all_blocks=1 00:08:47.201 --rc geninfo_unexecuted_blocks=1 00:08:47.201 00:08:47.201 ' 00:08:47.201 06:07:06 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.201 --rc genhtml_branch_coverage=1 00:08:47.201 --rc genhtml_function_coverage=1 00:08:47.201 --rc genhtml_legend=1 00:08:47.201 --rc geninfo_all_blocks=1 00:08:47.201 --rc geninfo_unexecuted_blocks=1 00:08:47.201 00:08:47.201 ' 00:08:47.201 06:07:06 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.201 06:07:06 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.201 06:07:06 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.201 06:07:06 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.201 06:07:06 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.201 06:07:06 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:47.201 06:07:06 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
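The "lt 1.15 2" check traced just before this decides whether the installed lcov predates 2.0, and therefore which LCOV options to export. scripts/common.sh's cmp_versions splits each version string on ".", "-" and ":" and compares the fields numerically, left to right. A condensed sketch of that logic; cmp_lt is a simplified stand-in, since the real function also takes an operator argument and normalizes non-numeric fields through decimal():

  cmp_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
      done
      return 1   # versions are equal
  }
  cmp_lt 1.15 2 && echo "lcov is pre-2.0"   # true: 1 < 2 in the first field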
00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:47.201 06:07:06 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:47.201 06:07:06 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.201 06:07:06 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:47.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.722 Waiting for block devices as requested 00:08:47.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.722 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.982 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.281 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:53.281 06:07:12 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:53.281 06:07:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:53.281 06:07:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:53.281 06:07:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:53.281 06:07:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
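The register dump that follows repeats, for each controller, the pattern already seen in the nvme_scc run: nvme_get pipes /usr/local/src/nvme-cli/nvme id-ctrl output through a "reg : val" reader and evals each pair into an associative array. Stripped of the xtrace noise, the loop looks roughly like this; it is a condensed sketch of nvme_get, not its exact body, and the trimming and quoting in functions.sh are more involved:

  shopt -s extglob
  declare -A nvme0
  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue
      reg=${reg//[[:space:]]/}              # e.g. "vid"
      val=${val##+([[:space:]])}            # strip leading spaces only
      eval "nvme0[$reg]=\"$val\""           # nvme0[vid]="0x1b36"
  done < <(nvme id-ctrl /dev/nvme0)
  echo "${nvme0[sn]}"                       # "12341 " (trailing padding preserved)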
00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:53.281 06:07:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.281 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:53.282 06:07:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
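The ver field captured a few records back (nvme0[ver]=0x10400) packs the controller's NVMe spec version into one word: major in bits 31:16, minor in bits 15:8, tertiary in bits 7:0. Decoded below as a standalone illustration, not part of the scripts:

  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( ver >> 8 & 0xff )) $(( ver & 0xff ))
  # -> NVMe 1.4.0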
00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:53.282 06:07:12 nvme_fdp -- 
00:08:53.282 06:07:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl, remaining fields (one IFS=: / read -r reg val / [[ -n $val ]] / eval cycle per field):
    unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
    sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
    anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44
    maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
    icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0
    msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
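The cycle collapsed above is a small bash parser: nvme-cli prints "reg : val" lines, and nvme_get stores each pair in a global associative array. A minimal sketch of that loop, assuming the same output shape (nvme_get_sketch is illustrative, not the traced helper):

    # Split each "reg : val" line from nvme-cli and stash it in an
    # associative array, as the IFS=: / read / eval cycle above does.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                      # e.g. a global nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # drop padding around the key
            [[ -n $val ]] && eval "${ref}[\$reg]=\"\${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    nvme_get_sketch nvme0 /dev/nvme0             # afterwards: ${nvme0[sqes]} -> 0x66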
00:08:53.284 06:07:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 power-state tail: rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:08:53.284 06:07:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:08:53.284 06:07:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:53.284 06:07:12 nvme_fdp -- nvme/functions.sh@55-56 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]; ns_dev=nvme0n1
00:08:53.284 06:07:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:08:53.284 06:07:12 nvme_fdp -- nvme/functions.sh@16-20 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 into local -gA 'nvme0n1=()'
00:08:53.284 06:07:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns fields:
    nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
    mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
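The functions.sh@53-58 records are the namespace walk: glob the controller's sysfs directory, then record each namespace under its number through a nameref. A compact sketch of the same idea (paths taken from the trace; the surrounding function is omitted):

    # Walk /sys/class/nvme/nvme0 for namespaces and key them by number,
    # mirroring functions.sh@53-58 above.
    declare -A nvme0_ns=()
    declare -n _ctrl_ns=nvme0_ns          # nameref; the trace uses local -n
    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/${ctrl##*/}n"*; do   # matches nvme0n1, nvme0n2, ...
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}     # e.g. _ctrl_ns[1]=nvme0n1
    done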
00:08:53.285 06:07:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns fields (continued):
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
    nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0'
    lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
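Worth decoding once: flbas=0x4 above selects the active LBA format by index (FLBAS bits 3:0 in Identify Namespace), which is why lbaf4 in the next records carries the "(in use)" tag:

    # FLBAS bits 3:0 index the LBA format currently in use.
    flbas=0x4
    echo "in use: lbaf$(( flbas & 0xf ))"   # -> lbaf4 (ms:0 lbads:12)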
00:08:53.286 06:07:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 LBA formats (continued):
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0'
    lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:08:53.286 06:07:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:08:53.286 06:07:12 nvme_fdp -- nvme/functions.sh@60-63 -- # ctrls["$ctrl_dev"]=nvme0; nvmes["$ctrl_dev"]=nvme0_ns; bdfs["$ctrl_dev"]=0000:00:11.0; ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:08:53.286 06:07:12 nvme_fdp -- nvme/functions.sh@47-50 -- # next controller: [[ -e /sys/class/nvme/nvme1 ]]; pci=0000:00:10.0; pci_can_use 0000:00:10.0
00:08:53.286 06:07:12 nvme_fdp -- scripts/common.sh@18-27 -- # [[ =~ 0000:00:10.0 ]]; [[ -z '' ]]; return 0
00:08:53.286 06:07:12 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme1; nvme_get nvme1 id-ctrl /dev/nvme1
00:08:53.286 06:07:12 nvme_fdp -- nvme/functions.sh@16-20 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 into local -gA 'nvme1=()'
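pci_can_use returns 0 here because both of its device lists expand empty ([[ -z '' ]]). A hedged sketch of that gate; the list variable names PCI_BLOCKED and PCI_ALLOWED are an assumption, not read from this log:

    # Assumed shape of the pci_can_use gate: reject block-listed BDFs,
    # accept everything when the allow list is empty.
    pci_can_use_sketch() {
        local bdf=$1 i
        for i in $PCI_BLOCKED; do [[ $i == "$bdf" ]] && return 1; done
        [[ -z $PCI_ALLOWED ]] && return 0
        for i in $PCI_ALLOWED; do [[ $i == "$bdf" ]] && return 0; done
        return 1
    }
    pci_can_use_sketch 0000:00:10.0 && echo usable   # empty lists -> usable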
00:08:53.286 06:07:12 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl fields (same per-field cycle as nvme0):
    vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
    rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
    oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
    nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0
    npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
    tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
    mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
    anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
    sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
    awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0
    maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0
    icdoff=0 fcatt=0 msdbd=0 ofcs=0
nvme1[ofcs]=0 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.289 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:53.290 06:07:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:08:53.290 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:53.291 06:07:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:53.291 06:07:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:53.291 06:07:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:53.291 06:07:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:53.291 
06:07:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:53.291 06:07:12 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:53.291 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:53.292 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:53.293 06:07:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.293 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:53.294 06:07:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:53.294 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
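Worth decoding from the identify data just traced: nsze/ncap/nuse are all 0x100000 (1,048,576 blocks) and flbas=0x4 selects LBA format 4, which the lbaf4 entry further down shows as lbads:12, i.e. 2^12 = 4096-byte blocks, so each of these QEMU namespaces is 4 GiB. A quick check of that arithmetic (values copied from this trace):

    # Namespace size from the identify fields above.
    nsze=0x100000    # blocks, from nvme2n1[nsze]
    lbads=12         # from "lbaf4: ms:0 lbads:12 rp:0 (in use)"
    echo $(( nsze * (1 << lbads) ))              # 4294967296 bytes
    echo $(( (nsze * (1 << lbads)) >> 30 ))GiB   # 4GiB
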
00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
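The per-namespace blocks repeat because of the functions.sh@54 loop visible in the trace: it globs the controller's sysfs children and runs id-ns on each, so nvme2n2 and nvme2n3 follow below with identical geometry. A sketch of that enumeration (the glob is taken from the trace; the loop body is simplified, the real loop stores each result via nvme_get as above):

    # Enumerate a controller's namespaces the way the traced loop does.
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                      # nvme2n1, nvme2n2, nvme2n3
        nvme id-ns "/dev/$ns_dev" >/dev/null  # parsed field-by-field in the log
    done
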
00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.295 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.296 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:53.297 06:07:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:53.297 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.298 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
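Each namespace dump ends with eight lbaf entries like "ms:64 lbads:12 rp:0": metadata bytes per block, log2 of the data size, and relative performance. One way to split such a string apart (the string is from this trace; the parsing approach is illustrative, not the script's own):

    # Pull ms/lbads/rp out of one traced lbaf string.
    lbaf='ms:64 lbads:12 rp:0'
    for kv in $lbaf; do
        declare "${kv%%:*}=${kv#*:}"   # ms=64, lbads=12, rp=0
    done
    echo "metadata ${ms}B, data $((1 << lbads))B, rp $rp"
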
00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:53.299 
06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.299 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:53.300 06:07:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:53.300 06:07:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:53.300 06:07:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:53.300 06:07:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:53.300 06:07:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:53.300 06:07:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.300 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 
06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:53.301 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 
06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.302 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:53.303 06:07:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:53.303 06:07:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
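Each ctrl_has_fdp call in this loop reduces to a single bit test: bit 19 of the Identify Controller CTRATT field advertises Flexible Data Placement (NVMe 2.0, TP4146), which is why the controllers reporting ctratt=0x8000 fall through while nvme3's 0x88010 passes. A minimal standalone sketch of the same check, assuming nvme-cli is installed; the default device path is hypothetical:

    #!/usr/bin/env bash
    # Test CTRATT bit 19 (Flexible Data Placement support) on one controller,
    # mirroring the ctrl_has_fdp check in nvme/functions.sh.
    dev=${1:-/dev/nvme0}    # hypothetical default device
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    if (( ctratt & 1 << 19 )); then
        echo "$dev: FDP supported (ctratt=$ctratt)"
    else
        echo "$dev: FDP not supported (ctratt=$ctratt)"
    fi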
00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:53.565 06:07:12 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:53.565 06:07:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:53.565 06:07:12 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:53.565 06:07:12 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:53.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.398 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.398 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.398 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.398 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.659 06:07:14 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:54.659 06:07:14 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:54.659 06:07:14 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.659 06:07:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:54.659 ************************************ 00:08:54.659 START TEST nvme_flexible_data_placement 00:08:54.659 ************************************ 00:08:54.659 06:07:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:54.917 Initializing NVMe Controllers 00:08:54.917 Attaching to 0000:00:13.0 00:08:54.917 Controller supports FDP Attached to 0000:00:13.0 00:08:54.917 Namespace ID: 1 Endurance Group ID: 1 00:08:54.917 Initialization complete. 00:08:54.917 00:08:54.917 ================================== 00:08:54.917 == FDP tests for Namespace: #01 == 00:08:54.918 ================================== 00:08:54.918 00:08:54.918 Get Feature: FDP: 00:08:54.918 ================= 00:08:54.918 Enabled: Yes 00:08:54.918 FDP configuration Index: 0 00:08:54.918 00:08:54.918 FDP configurations log page 00:08:54.918 =========================== 00:08:54.918 Number of FDP configurations: 1 00:08:54.918 Version: 0 00:08:54.918 Size: 112 00:08:54.918 FDP Configuration Descriptor: 0 00:08:54.918 Descriptor Size: 96 00:08:54.918 Reclaim Group Identifier format: 2 00:08:54.918 FDP Volatile Write Cache: Not Present 00:08:54.918 FDP Configuration: Valid 00:08:54.918 Vendor Specific Size: 0 00:08:54.918 Number of Reclaim Groups: 2 00:08:54.918 Number of Reclaim Unit Handles: 8 00:08:54.918 Max Placement Identifiers: 128 00:08:54.918 Number of Namespaces Supported: 256 00:08:54.918 Reclaim unit Nominal Size: 6000000 bytes 00:08:54.918 Estimated Reclaim Unit Time Limit: Not Reported 00:08:54.918 RUH Desc #000: RUH Type: Initially Isolated 00:08:54.918 RUH Desc #001: RUH Type: Initially Isolated 00:08:54.918 RUH Desc #002: RUH Type: Initially Isolated 00:08:54.918 RUH Desc #003: RUH Type: Initially Isolated 00:08:54.918 RUH Desc #004: RUH Type: Initially Isolated 00:08:54.918 RUH Desc #005: RUH Type: Initially Isolated 00:08:54.918 RUH Desc #006: RUH Type: Initially Isolated 00:08:54.918 RUH Desc #007: RUH Type: Initially Isolated 00:08:54.918 00:08:54.918 FDP reclaim unit handle usage log page 00:08:54.918 ====================================== 00:08:54.918 Number of Reclaim Unit Handles: 8 00:08:54.918 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:54.918 RUH Usage Desc #001: RUH Attributes: Unused 00:08:54.918 RUH Usage Desc #002: RUH Attributes: Unused 00:08:54.918 RUH Usage Desc #003: RUH Attributes: Unused 00:08:54.918 RUH Usage Desc #004: RUH Attributes: Unused 00:08:54.918 RUH Usage Desc #005: RUH Attributes: Unused 00:08:54.918 RUH Usage Desc #006: RUH Attributes: Unused 00:08:54.918 RUH Usage Desc #007: RUH Attributes: Unused 00:08:54.918 00:08:54.918 FDP statistics log page 00:08:54.918 ======================= 00:08:54.918 Host bytes with metadata written: 936308736 00:08:54.918 Media bytes with metadata written: 936546304 00:08:54.918 Media bytes erased: 0 00:08:54.918 00:08:54.918 FDP Reclaim unit handle status 00:08:54.918 ============================== 00:08:54.918 Number of RUHS descriptors: 2 00:08:54.918 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004311 00:08:54.918 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:08:54.918 00:08:54.918 FDP write on placement id: 0 success 00:08:54.918 00:08:54.918 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:08:54.918 00:08:54.918 IO mgmt send: RUH update for Placement ID: #0 Success 00:08:54.918 00:08:54.918 Get Feature: FDP Events for Placement handle: #0 00:08:54.918 ======================== 00:08:54.918 Number of FDP Events: 6 00:08:54.918 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:08:54.918 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:08:54.918 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:08:54.918 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:08:54.918 FDP Event: #4 Type: Media Reallocated Enabled: No 00:08:54.918 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:08:54.918 00:08:54.918 FDP events log page 00:08:54.918 =================== 00:08:54.918 Number of FDP events: 1 00:08:54.918 FDP Event #0: 00:08:54.918 Event Type: RU Not Written to Capacity 00:08:54.918 Placement Identifier: Valid 00:08:54.918 NSID: Valid 00:08:54.918 Location: Valid 00:08:54.918 Placement Identifier: 0 00:08:54.918 Event Timestamp: 8 00:08:54.918 Namespace Identifier: 1 00:08:54.918 Reclaim Group Identifier: 0 00:08:54.918 Reclaim Unit Handle Identifier: 0 00:08:54.918 00:08:54.918 FDP test passed 00:08:54.918 00:08:54.918 real 0m0.257s 00:08:54.918 user 0m0.083s 00:08:54.918 sys 0m0.071s 00:08:54.918 06:07:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.918 ************************************ 00:08:54.918 END TEST nvme_flexible_data_placement 00:08:54.918 ************************************ 00:08:54.918 06:07:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:08:54.918 ************************************ 00:08:54.918 END TEST nvme_fdp 00:08:54.918 ************************************ 00:08:54.918 00:08:54.918 real 0m8.015s 00:08:54.918 user 0m1.170s 00:08:54.918 sys 0m1.537s 00:08:54.918 06:07:14 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.918 06:07:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:54.918 06:07:14 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:08:54.918 06:07:14 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:54.918 06:07:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:54.918 06:07:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.918 06:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:54.918 ************************************ 00:08:54.918 START TEST nvme_rpc 00:08:54.918 ************************************ 00:08:54.918 06:07:14 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:55.234 * Looking for test storage... 
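The FDP report printed above maps directly onto the NVMe 2.0 (TP4146) FDP interfaces: Get Feature 0x1D returns the enable state and configuration index, and log pages 0x20 through 0x23 carry the configurations, reclaim unit handle usage, statistics, and events. A hedged sketch of pulling the same data by hand, assuming an nvme-cli build new enough to cover these IDs; the device path is the FDP-capable controller from this run:

    # Read by hand the FDP state the fdp test printed above.
    dev=/dev/nvme3    # hypothetical: FDP-capable controller, endurance group 1
    nvme get-feature "$dev" --feature-id=0x1d --cdw11=1    # FDP mode, endurance group 1
    nvme get-log "$dev" --log-id=0x20 --log-len=512        # FDP configurations
    nvme get-log "$dev" --log-id=0x22 --log-len=64         # FDP statistics

(The FDP log pages are endurance-group scoped, so some controllers also need the log specific identifier set; recent nvme-cli builds expose that on get-log as --lsi.)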
00:08:55.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.234 06:07:14 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:55.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.234 --rc genhtml_branch_coverage=1 00:08:55.234 --rc genhtml_function_coverage=1 00:08:55.234 --rc genhtml_legend=1 00:08:55.234 --rc geninfo_all_blocks=1 00:08:55.234 --rc geninfo_unexecuted_blocks=1 00:08:55.234 00:08:55.234 ' 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:55.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.234 --rc genhtml_branch_coverage=1 00:08:55.234 --rc genhtml_function_coverage=1 00:08:55.234 --rc genhtml_legend=1 00:08:55.234 --rc geninfo_all_blocks=1 00:08:55.234 --rc geninfo_unexecuted_blocks=1 00:08:55.234 00:08:55.234 ' 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:55.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.234 --rc genhtml_branch_coverage=1 00:08:55.234 --rc genhtml_function_coverage=1 00:08:55.234 --rc genhtml_legend=1 00:08:55.234 --rc geninfo_all_blocks=1 00:08:55.234 --rc geninfo_unexecuted_blocks=1 00:08:55.234 00:08:55.234 ' 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:55.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.234 --rc genhtml_branch_coverage=1 00:08:55.234 --rc genhtml_function_coverage=1 00:08:55.234 --rc genhtml_legend=1 00:08:55.234 --rc geninfo_all_blocks=1 00:08:55.234 --rc geninfo_unexecuted_blocks=1 00:08:55.234 00:08:55.234 ' 00:08:55.234 06:07:14 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.234 06:07:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:55.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.234 06:07:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:08:55.234 06:07:14 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65821 00:08:55.234 06:07:14 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:08:55.234 06:07:14 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65821 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65821 ']' 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.234 06:07:14 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.234 06:07:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.234 [2024-11-20 06:07:14.834165] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
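The attach address used by this test is discovered, not hard-coded: get_first_nvme_bdf, traced above, asks gen_nvme.sh for every NVMe controller it can see, extracts the traddr fields with jq, and keeps the first one (0000:00:10.0 in this run). A condensed sketch of that selection, using the repo path from this run and head -n1 in place of the script's array indexing:

    # Sketch of get_first_nvme_bdf: enumerate NVMe controller addresses
    # known to SPDK's config generator and keep the first.
    rootdir=/home/vagrant/spdk_repo/spdk    # repo path used in this run
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    echo "first NVMe bdf: $bdf"    # 0000:00:10.0 here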
00:08:55.234 [2024-11-20 06:07:14.834333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65821 ] 00:08:55.507 [2024-11-20 06:07:15.001290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.767 [2024-11-20 06:07:15.141647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.767 [2024-11-20 06:07:15.141916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.336 06:07:15 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.336 06:07:15 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:56.336 06:07:15 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:08:56.594 Nvme0n1 00:08:56.594 06:07:16 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:08:56.594 06:07:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:08:56.852 request: 00:08:56.852 { 00:08:56.852 "bdev_name": "Nvme0n1", 00:08:56.852 "filename": "non_existing_file", 00:08:56.852 "method": "bdev_nvme_apply_firmware", 00:08:56.852 "req_id": 1 00:08:56.852 } 00:08:56.852 Got JSON-RPC error response 00:08:56.852 response: 00:08:56.852 { 00:08:56.852 "code": -32603, 00:08:56.852 "message": "open file failed." 00:08:56.852 } 00:08:56.852 06:07:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:08:56.852 06:07:16 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:08:56.852 06:07:16 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:08:57.110 06:07:16 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:57.110 06:07:16 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65821 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65821 ']' 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65821 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65821 00:08:57.110 killing process with pid 65821 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65821' 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65821 00:08:57.110 06:07:16 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65821 00:08:59.024 00:08:59.024 real 0m3.741s 00:08:59.024 user 0m7.027s 00:08:59.024 sys 0m0.657s 00:08:59.024 ************************************ 00:08:59.024 END TEST nvme_rpc 00:08:59.024 ************************************ 00:08:59.024 06:07:18 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.024 06:07:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.024 06:07:18 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:59.024 06:07:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:08:59.024 06:07:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.024 06:07:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.024 ************************************ 00:08:59.024 START TEST nvme_rpc_timeouts 00:08:59.024 ************************************ 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:59.024 * Looking for test storage... 00:08:59.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
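The nvme_rpc test that just completed is a negative test: bdev_nvme_apply_firmware is deliberately pointed at a missing file and must come back with JSON-RPC error -32603 ("open file failed.") rather than succeeding or hanging. A hedged replay of that sequence, assuming a spdk_tgt is already running as launched above, using the same rpc.py calls the trace shows:

    # Replay of the nvme_rpc negative test: attach, apply a nonexistent
    # firmware file (expected to fail), then detach. rpc.py exits non-zero
    # when the target returns a JSON-RPC error.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    if ! "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "apply_firmware failed as expected"
    fi
    "$rpc" bdev_nvme_detach_controller Nvme0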
00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.024 06:07:18 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.024 --rc genhtml_branch_coverage=1 00:08:59.024 --rc genhtml_function_coverage=1 00:08:59.024 --rc genhtml_legend=1 00:08:59.024 --rc geninfo_all_blocks=1 00:08:59.024 --rc geninfo_unexecuted_blocks=1 00:08:59.024 00:08:59.024 ' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.024 --rc genhtml_branch_coverage=1 00:08:59.024 --rc genhtml_function_coverage=1 00:08:59.024 --rc genhtml_legend=1 00:08:59.024 --rc geninfo_all_blocks=1 00:08:59.024 --rc geninfo_unexecuted_blocks=1 00:08:59.024 00:08:59.024 ' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.024 --rc genhtml_branch_coverage=1 00:08:59.024 --rc genhtml_function_coverage=1 00:08:59.024 --rc genhtml_legend=1 00:08:59.024 --rc geninfo_all_blocks=1 00:08:59.024 --rc geninfo_unexecuted_blocks=1 00:08:59.024 00:08:59.024 ' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.024 --rc genhtml_branch_coverage=1 00:08:59.024 --rc genhtml_function_coverage=1 00:08:59.024 --rc genhtml_legend=1 00:08:59.024 --rc geninfo_all_blocks=1 00:08:59.024 --rc geninfo_unexecuted_blocks=1 00:08:59.024 00:08:59.024 ' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.024 06:07:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65891 00:08:59.024 06:07:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65891 00:08:59.024 06:07:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65929 00:08:59.024 06:07:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:08:59.024 06:07:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65929 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65929 ']' 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.024 06:07:18 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:59.024 06:07:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:59.024 [2024-11-20 06:07:18.580282] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:08:59.024 [2024-11-20 06:07:18.580519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65929 ] 00:08:59.286 [2024-11-20 06:07:18.762626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.548 [2024-11-20 06:07:18.922184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.548 [2024-11-20 06:07:18.922218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.142 Checking default timeout settings: 00:09:00.142 06:07:19 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:00.142 06:07:19 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:09:00.142 06:07:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:00.142 06:07:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:00.404 Making settings changes with rpc: 00:09:00.404 06:07:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:00.404 06:07:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:00.667 Check default vs. modified settings: 00:09:00.667 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:00.667 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65891 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65891 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:01.239 Setting action_on_timeout is changed as expected. 
00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65891 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65891 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:01.239 Setting timeout_us is changed as expected. 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65891 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65891 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:01.239 Setting timeout_admin_us is changed as expected. 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
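Condensed, the default-vs-modified comparison traced above reduces to the bash flow below. This is a sketch reconstructed from the trace records, not the verbatim nvme_rpc_timeouts.sh; in particular, redirecting the save_config output into the two tmp files is implied by the tmpfile variables defined earlier in the trace rather than shown directly.

# Sketch of the timeout-settings check: snapshot defaults, change options, snapshot again, diff.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_65891      # snapshot the default settings
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_65891     # snapshot after the change
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_65891 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_65891 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done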
00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65891 /tmp/settings_modified_65891 00:09:01.239 06:07:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65929 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65929 ']' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65929 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65929 00:09:01.239 killing process with pid 65929 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65929' 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65929 00:09:01.239 06:07:20 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65929 00:09:03.153 RPC TIMEOUT SETTING TEST PASSED. 00:09:03.153 06:07:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:03.153 ************************************ 00:09:03.153 END TEST nvme_rpc_timeouts 00:09:03.153 ************************************ 00:09:03.153 00:09:03.153 real 0m4.039s 00:09:03.153 user 0m7.627s 00:09:03.153 sys 0m0.662s 00:09:03.153 06:07:22 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.153 06:07:22 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:03.153 06:07:22 -- spdk/autotest.sh@239 -- # uname -s 00:09:03.153 06:07:22 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:03.153 06:07:22 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:03.153 06:07:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.153 06:07:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.153 06:07:22 -- common/autotest_common.sh@10 -- # set +x 00:09:03.153 ************************************ 00:09:03.153 START TEST sw_hotplug 00:09:03.153 ************************************ 00:09:03.153 06:07:22 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:03.153 * Looking for test storage... 
00:09:03.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:03.153 06:07:22 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:03.153 06:07:22 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:03.153 06:07:22 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:03.153 06:07:22 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:03.153 06:07:22 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.153 06:07:22 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.153 06:07:22 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.153 06:07:22 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.153 06:07:22 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.154 06:07:22 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:03.154 06:07:22 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.154 06:07:22 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:03.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.154 --rc genhtml_branch_coverage=1 00:09:03.154 --rc genhtml_function_coverage=1 00:09:03.154 --rc genhtml_legend=1 00:09:03.154 --rc geninfo_all_blocks=1 00:09:03.154 --rc geninfo_unexecuted_blocks=1 00:09:03.154 00:09:03.154 ' 00:09:03.154 06:07:22 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:03.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.154 --rc genhtml_branch_coverage=1 00:09:03.154 --rc genhtml_function_coverage=1 00:09:03.154 --rc genhtml_legend=1 00:09:03.154 --rc geninfo_all_blocks=1 00:09:03.154 --rc geninfo_unexecuted_blocks=1 00:09:03.154 00:09:03.154 ' 00:09:03.154 06:07:22 
sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:03.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.154 --rc genhtml_branch_coverage=1 00:09:03.154 --rc genhtml_function_coverage=1 00:09:03.154 --rc genhtml_legend=1 00:09:03.154 --rc geninfo_all_blocks=1 00:09:03.154 --rc geninfo_unexecuted_blocks=1 00:09:03.154 00:09:03.154 ' 00:09:03.154 06:07:22 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:03.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.154 --rc genhtml_branch_coverage=1 00:09:03.154 --rc genhtml_function_coverage=1 00:09:03.154 --rc genhtml_legend=1 00:09:03.154 --rc geninfo_all_blocks=1 00:09:03.154 --rc geninfo_unexecuted_blocks=1 00:09:03.154 00:09:03.154 ' 00:09:03.154 06:07:22 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:03.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:03.414 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:03.414 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:03.414 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:03.414 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:03.675 06:07:23 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:03.675 06:07:23 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:03.675 06:07:23 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:03.675 06:07:23 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:03.675 
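The enumeration traced here (nvme_in_userspace in scripts/common.sh) first assembles the NVMe PCI class filter: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). That filter stage collapses to the single pipeline below, with every stage copied from the trace records above; the xtrace interleaving prints tr before awk, but the logical order is lspci | grep | awk | tr.

# Print PCI addresses whose class code is 0108 with programming interface 02 (NVMe controllers).
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'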
06:07:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:03.675 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:03.676 06:07:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:03.676 06:07:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:03.676 06:07:23 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:03.676 06:07:23 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.676 06:07:23 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:03.676 06:07:23 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:03.676 06:07:23 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:03.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:04.198 Waiting for block devices as requested 00:09:04.198 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.198 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.457 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.457 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.748 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:09.748 06:07:29 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:09.748 06:07:29 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:10.006 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:10.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.006 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:10.263 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:10.264 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.264 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.521 06:07:29 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:10.521 06:07:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66790 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:10.521 06:07:30 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:10.521 06:07:30 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:10.521 06:07:30 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:10.521 06:07:30 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:10.521 06:07:30 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:10.521 06:07:30 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:10.779 Initializing NVMe Controllers 00:09:10.779 Attaching to 0000:00:10.0 00:09:10.779 Attaching to 0000:00:11.0 00:09:10.779 Attached to 0000:00:10.0 00:09:10.779 Attached to 0000:00:11.0 00:09:10.779 Initialization complete. Starting I/O... 00:09:10.779 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:10.779 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:10.779 00:09:11.710 QEMU NVMe Ctrl (12340 ): 2527 I/Os completed (+2527) 00:09:11.710 QEMU NVMe Ctrl (12341 ): 2514 I/Os completed (+2514) 00:09:11.710 00:09:12.642 QEMU NVMe Ctrl (12340 ): 5571 I/Os completed (+3044) 00:09:12.642 QEMU NVMe Ctrl (12341 ): 5504 I/Os completed (+2990) 00:09:12.642 00:09:14.014 QEMU NVMe Ctrl (12340 ): 8644 I/Os completed (+3073) 00:09:14.014 QEMU NVMe Ctrl (12341 ): 8879 I/Os completed (+3375) 00:09:14.014 00:09:14.947 QEMU NVMe Ctrl (12340 ): 11875 I/Os completed (+3231) 00:09:14.947 QEMU NVMe Ctrl (12341 ): 12147 I/Os completed (+3268) 00:09:14.947 00:09:15.881 QEMU NVMe Ctrl (12340 ): 14918 I/Os completed (+3043) 00:09:15.881 QEMU NVMe Ctrl (12341 ): 15203 I/Os completed (+3056) 00:09:15.881 00:09:16.446 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:16.446 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:16.446 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:16.446 [2024-11-20 06:07:36.027803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:16.446 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:16.446 [2024-11-20 06:07:36.029165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.446 [2024-11-20 06:07:36.029224] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.446 [2024-11-20 06:07:36.029243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.446 [2024-11-20 06:07:36.029263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:16.447 [2024-11-20 06:07:36.031174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.031222] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.031236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.031250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:16.447 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:16.447 [2024-11-20 06:07:36.050668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:16.447 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:16.447 [2024-11-20 06:07:36.051854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.051977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.052002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.052020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:16.447 [2024-11-20 06:07:36.053687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.053722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.053738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 [2024-11-20 06:07:36.053750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:16.447 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:16.447 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:16.447 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:16.447 EAL: Scan for (pci) bus failed. 00:09:16.704 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:16.704 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:16.704 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:16.704 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:16.704 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:16.704 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:16.705 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:16.705 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:16.705 Attaching to 0000:00:10.0 00:09:16.705 Attached to 0000:00:10.0 00:09:16.705 QEMU NVMe Ctrl (12340 ): 161 I/Os completed (+161) 00:09:16.705 00:09:16.705 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:16.705 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:16.705 06:07:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:16.705 Attaching to 0000:00:11.0 00:09:16.705 Attached to 0000:00:11.0 00:09:17.638 QEMU NVMe Ctrl (12340 ): 3550 I/Os completed (+3389) 00:09:17.638 QEMU NVMe Ctrl (12341 ): 3255 I/Os completed (+3255) 00:09:17.638 00:09:19.013 QEMU NVMe Ctrl (12340 ): 6699 I/Os completed (+3149) 00:09:19.013 QEMU NVMe Ctrl (12341 ): 6372 I/Os completed (+3117) 00:09:19.013 00:09:19.954 QEMU NVMe Ctrl (12340 ): 9871 I/Os completed (+3172) 00:09:19.954 QEMU NVMe Ctrl (12341 ): 9540 I/Os completed (+3168) 00:09:19.954 00:09:20.893 QEMU NVMe Ctrl (12340 ): 13116 I/Os completed (+3245) 00:09:20.893 QEMU NVMe Ctrl (12341 ): 12926 I/Os completed (+3386) 00:09:20.893 00:09:21.835 QEMU NVMe Ctrl (12340 ): 16148 I/Os completed (+3032) 00:09:21.835 QEMU NVMe Ctrl (12341 ): 16033 I/Os completed (+3107) 00:09:21.835 00:09:22.776 QEMU NVMe Ctrl (12340 ): 19221 I/Os completed (+3073) 00:09:22.776 QEMU NVMe Ctrl (12341 ): 19101 I/Os completed (+3068) 00:09:22.776 00:09:23.728 QEMU NVMe Ctrl (12340 ): 22458 I/Os completed (+3237) 
00:09:23.728 QEMU NVMe Ctrl (12341 ): 22367 I/Os completed (+3266) 00:09:23.728 00:09:24.662 QEMU NVMe Ctrl (12340 ): 25461 I/Os completed (+3003) 00:09:24.662 QEMU NVMe Ctrl (12341 ): 25419 I/Os completed (+3052) 00:09:24.662 00:09:26.035 QEMU NVMe Ctrl (12340 ): 28705 I/Os completed (+3244) 00:09:26.035 QEMU NVMe Ctrl (12341 ): 28873 I/Os completed (+3454) 00:09:26.035 00:09:26.969 QEMU NVMe Ctrl (12340 ): 31788 I/Os completed (+3083) 00:09:26.969 QEMU NVMe Ctrl (12341 ): 32119 I/Os completed (+3246) 00:09:26.969 00:09:27.904 QEMU NVMe Ctrl (12340 ): 34815 I/Os completed (+3027) 00:09:27.904 QEMU NVMe Ctrl (12341 ): 35119 I/Os completed (+3000) 00:09:27.904 00:09:28.847 QEMU NVMe Ctrl (12340 ): 38065 I/Os completed (+3250) 00:09:28.847 QEMU NVMe Ctrl (12341 ): 38491 I/Os completed (+3372) 00:09:28.847 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:28.847 [2024-11-20 06:07:48.290116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:28.847 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:28.847 [2024-11-20 06:07:48.291344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.291478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.291531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.291615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:28.847 [2024-11-20 06:07:48.293574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.293685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.293719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.293786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:28.847 [2024-11-20 06:07:48.312226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:28.847 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:28.847 [2024-11-20 06:07:48.313380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.313486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.313579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.313598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:28.847 [2024-11-20 06:07:48.315279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.315395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.315415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 [2024-11-20 06:07:48.315430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:28.847 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:28.847 EAL: Scan for (pci) bus failed. 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:28.847 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:28.847 Attaching to 0000:00:10.0 00:09:28.847 Attached to 0000:00:10.0 00:09:29.105 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:29.105 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:29.105 06:07:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:29.105 Attaching to 0000:00:11.0 00:09:29.105 Attached to 0000:00:11.0 00:09:29.670 QEMU NVMe Ctrl (12340 ): 2523 I/Os completed (+2523) 00:09:29.670 QEMU NVMe Ctrl (12341 ): 2244 I/Os completed (+2244) 00:09:29.670 00:09:31.044 QEMU NVMe Ctrl (12340 ): 5836 I/Os completed (+3313) 00:09:31.044 QEMU NVMe Ctrl (12341 ): 5490 I/Os completed (+3246) 00:09:31.044 00:09:31.977 QEMU NVMe Ctrl (12340 ): 9269 I/Os completed (+3433) 00:09:31.978 QEMU NVMe Ctrl (12341 ): 8962 I/Os completed (+3472) 00:09:31.978 00:09:32.912 QEMU NVMe Ctrl (12340 ): 12727 I/Os completed (+3458) 00:09:32.912 QEMU NVMe Ctrl (12341 ): 12420 I/Os completed (+3458) 00:09:32.912 00:09:33.844 QEMU NVMe Ctrl (12340 ): 16221 I/Os completed (+3494) 00:09:33.844 QEMU NVMe Ctrl (12341 ): 15916 I/Os completed (+3496) 00:09:33.844 00:09:34.775 QEMU NVMe Ctrl (12340 ): 19695 I/Os completed (+3474) 00:09:34.775 QEMU NVMe Ctrl (12341 ): 19374 I/Os completed (+3458) 00:09:34.775 00:09:35.709 QEMU NVMe Ctrl (12340 ): 22658 I/Os completed (+2963) 00:09:35.709 QEMU NVMe Ctrl (12341 ): 22337 I/Os completed (+2963) 00:09:35.709 
00:09:36.643 QEMU NVMe Ctrl (12340 ): 25700 I/Os completed (+3042) 00:09:36.643 QEMU NVMe Ctrl (12341 ): 25404 I/Os completed (+3067) 00:09:36.643 00:09:37.621 QEMU NVMe Ctrl (12340 ): 28658 I/Os completed (+2958) 00:09:37.621 QEMU NVMe Ctrl (12341 ): 28504 I/Os completed (+3100) 00:09:37.621 00:09:38.996 QEMU NVMe Ctrl (12340 ): 32053 I/Os completed (+3395) 00:09:38.996 QEMU NVMe Ctrl (12341 ): 32070 I/Os completed (+3566) 00:09:38.996 00:09:39.930 QEMU NVMe Ctrl (12340 ): 35036 I/Os completed (+2983) 00:09:39.930 QEMU NVMe Ctrl (12341 ): 34992 I/Os completed (+2922) 00:09:39.930 00:09:40.867 QEMU NVMe Ctrl (12340 ): 38365 I/Os completed (+3329) 00:09:40.867 QEMU NVMe Ctrl (12341 ): 38306 I/Os completed (+3314) 00:09:40.867 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.154 [2024-11-20 06:08:00.535512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:41.154 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:41.154 [2024-11-20 06:08:00.536573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.536697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.536732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.536786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:41.154 [2024-11-20 06:08:00.538453] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.538565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.538594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.538659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.154 [2024-11-20 06:08:00.562455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:41.154 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:41.154 [2024-11-20 06:08:00.563443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.563584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.563616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.563681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:41.154 [2024-11-20 06:08:00.565216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.565303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.565333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 [2024-11-20 06:08:00.565375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:41.154 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:41.154 EAL: Scan for (pci) bus failed. 00:09:41.154 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.155 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.155 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:41.155 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:41.155 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:41.155 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.155 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.155 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:41.155 Attaching to 0000:00:10.0 00:09:41.155 Attached to 0000:00:10.0 00:09:41.412 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:41.412 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:41.412 06:08:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:41.412 Attaching to 0000:00:11.0 00:09:41.412 Attached to 0000:00:11.0 00:09:41.412 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:41.412 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:41.412 [2024-11-20 06:08:00.832861] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:53.604 06:08:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:53.604 06:08:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:53.604 06:08:12 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.80 00:09:53.604 06:08:12 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.80 00:09:53.604 06:08:12 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:09:53.604 06:08:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.80 00:09:53.604 06:08:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.80 2 00:09:53.604 remove_attach_helper took 42.80s to complete (handling 2 nvme drive(s)) 06:08:12 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66790 00:10:00.157 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66790) - No such process 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66790 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:00.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67334 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67334 00:10:00.157 06:08:18 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67334 ']' 00:10:00.157 06:08:18 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.157 06:08:18 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:00.157 06:08:18 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:00.157 06:08:18 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.157 06:08:18 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:00.157 06:08:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:00.157 [2024-11-20 06:08:18.913975] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:10:00.157 [2024-11-20 06:08:18.914277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67334 ] 00:10:00.157 [2024-11-20 06:08:19.069092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.157 [2024-11-20 06:08:19.171061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.157 06:08:19 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:00.157 06:08:19 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:10:00.157 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:00.157 06:08:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.157 06:08:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:00.414 06:08:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:00.414 06:08:19 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:00.414 06:08:19 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:00.414 06:08:19 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:00.414 06:08:19 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:00.414 06:08:19 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:00.414 06:08:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:07.005 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:07.006 06:08:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.006 06:08:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:07.006 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:07.006 06:08:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.006 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:07.006 06:08:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:07.006 [2024-11-20 06:08:25.885777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:07.006 [2024-11-20 06:08:25.887222] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:25.887385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:25.887404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.006 [2024-11-20 06:08:25.887423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:25.887432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:25.887440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.006 [2024-11-20 06:08:25.887448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:25.887460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:25.887467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.006 [2024-11-20 06:08:25.887480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:25.887487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:25.887510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:07.006 06:08:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.006 06:08:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:07.006 06:08:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:07.006 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:07.006 [2024-11-20 06:08:26.485777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:07.006 [2024-11-20 06:08:26.487293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:26.487330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:26.487343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.006 [2024-11-20 06:08:26.487357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:26.487366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:26.487374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.006 [2024-11-20 06:08:26.487383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:26.487389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:26.487397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.006 [2024-11-20 06:08:26.487404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.006 [2024-11-20 06:08:26.487412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.006 [2024-11-20 06:08:26.487419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:07.573 06:08:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.573 06:08:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:07.573 06:08:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:07.573 06:08:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
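The "Still waiting for ... to be gone" polling that surrounds this point reduces to the helper below, reconstructed from the sw_hotplug.sh@12-13 trace records; rpc_cmd is rendered here as a direct rpc.py call, and the jq filter is copied verbatim from the trace. The loop re-queries bdev_get_bdevs every 0.5s until none of the tracked PCI addresses is still reported.

# Sketch of the bdev_bdfs helper the hotplug loop polls (rpc_cmd shown as plain rpc.py).
bdev_bdfs() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}
while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done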
00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:07.573 06:08:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:19.847 06:08:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.847 06:08:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:19.847 06:08:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:19.847 06:08:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:19.847 06:08:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.847 [2024-11-20 06:08:39.285983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
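Each hotplug event in this trace has the same shape: write 1 to a per-device sysfs remove node (the echo 1 at sw_hotplug.sh@40), poll until bdev_bdfs reports the BDF gone, then re-enumerate and hand the device back to uio_pci_generic (the @56-@62 echoes: 1, the driver name, the BDF twice, and an empty string). Only the echoed values appear in the trace, not their destinations, so the sysfs paths below are a plausible reconstruction from the standard Linux PCI hotplug interface, not the script's confirmed targets:

    # Hedged sketch: every sysfs path here is an assumption; the trace shows
    # only the values being echoed, not where they go.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"       # @40: hot-remove
    done
    # ...wait for bdev_bdfs to drain (previous sketch)...
    echo 1 > /sys/bus/pci/rescan                          # @56: re-enumerate
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" || :         # @60 (guess)
        echo "$dev" > /sys/bus/pci/drivers_probe                             # @61 (guess)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: reset
    done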
00:10:19.847 [2024-11-20 06:08:39.287409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.847 [2024-11-20 06:08:39.287521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.847 [2024-11-20 06:08:39.287581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.847 [2024-11-20 06:08:39.287637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.847 [2024-11-20 06:08:39.287656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.847 [2024-11-20 06:08:39.287682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.847 [2024-11-20 06:08:39.287736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.847 [2024-11-20 06:08:39.287756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.847 [2024-11-20 06:08:39.287915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.847 [2024-11-20 06:08:39.287944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.847 [2024-11-20 06:08:39.287978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.847 [2024-11-20 06:08:39.288230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.847 06:08:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:19.847 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:20.413 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:20.413 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:20.413 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:20.413 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:20.413 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:20.413 06:08:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.413 06:08:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:20.413 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:20.413 06:08:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.414 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:20.414 06:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:20.414 [2024-11-20 06:08:39.885994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:20.414 [2024-11-20 06:08:39.887519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.414 [2024-11-20 06:08:39.887552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.414 [2024-11-20 06:08:39.887566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.414 [2024-11-20 06:08:39.887582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.414 [2024-11-20 06:08:39.887591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.414 [2024-11-20 06:08:39.887598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.414 [2024-11-20 06:08:39.887608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.414 [2024-11-20 06:08:39.887615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.414 [2024-11-20 06:08:39.887623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.414 [2024-11-20 06:08:39.887630] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.414 [2024-11-20 06:08:39.887638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.414 [2024-11-20 06:08:39.887645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:20.981 06:08:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.981 06:08:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:20.981 06:08:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.981 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:21.239 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:21.239 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.239 06:08:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:33.467 06:08:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.467 06:08:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.467 06:08:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.467 [2024-11-20 06:08:52.686206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:33.467 [2024-11-20 06:08:52.687912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.467 [2024-11-20 06:08:52.688069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.467 [2024-11-20 06:08:52.688418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.467 [2024-11-20 06:08:52.688648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.467 [2024-11-20 06:08:52.688661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.467 [2024-11-20 06:08:52.688674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.467 [2024-11-20 06:08:52.688682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.467 [2024-11-20 06:08:52.688691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.467 [2024-11-20 06:08:52.688697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.467 [2024-11-20 06:08:52.688707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.467 [2024-11-20 06:08:52.688714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.467 [2024-11-20 06:08:52.688722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.467 06:08:52 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:33.467 06:08:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.467 06:08:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 06:08:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:33.467 06:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:33.726 [2024-11-20 06:08:53.186213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:33.726 [2024-11-20 06:08:53.187593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.726 [2024-11-20 06:08:53.187756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.726 [2024-11-20 06:08:53.187773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.726 [2024-11-20 06:08:53.187788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.726 [2024-11-20 06:08:53.187798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.726 [2024-11-20 06:08:53.187805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.726 [2024-11-20 06:08:53.187815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.726 [2024-11-20 06:08:53.187821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.726 [2024-11-20 06:08:53.187831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.726 [2024-11-20 06:08:53.187838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.726 [2024-11-20 06:08:53.187846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.726 [2024-11-20 06:08:53.187853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:10:33.726 06:08:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.726 06:08:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.726 06:08:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:33.726 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:33.984 06:08:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.75 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.75 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.75 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.75 2 00:10:46.175 remove_attach_helper took 45.75s to complete (handling 2 nvme drive(s)) 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:46.175 06:09:05 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:46.175 06:09:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.727 06:09:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.727 06:09:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.727 06:09:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:52.727 06:09:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:52.727 [2024-11-20 06:09:11.666131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
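The debug_remove_attach_helper 3 6 true call above is what produces figures like the "45.75s" reported a moment earlier: it runs remove_attach_helper with hotplug_events=3, hotplug_wait=6 and use_bdev=true under bash's time builtin, with TIMEFORMAT=%2R so only the wall-clock seconds (two decimals) come back. A sketch of that timing wrapper matching the traced variables (autotest_common.sh@707-720); the exact redirections are not visible in the trace and are assumed:

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R   # %2R: real time only, two decimals
        # Capture `time`'s stderr while discarding the timed command's stdout.
        time=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
        echo "$time"                           # e.g. 45.75
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2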
00:10:52.727 [2024-11-20 06:09:11.667513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.727 [2024-11-20 06:09:11.667542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.727 [2024-11-20 06:09:11.667553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.727 [2024-11-20 06:09:11.667571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.727 [2024-11-20 06:09:11.667579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.727 [2024-11-20 06:09:11.667588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.727 [2024-11-20 06:09:11.667595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.727 [2024-11-20 06:09:11.667604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.727 [2024-11-20 06:09:11.667611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.727 [2024-11-20 06:09:11.667619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.728 [2024-11-20 06:09:11.667626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.728 [2024-11-20 06:09:11.667636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.728 [2024-11-20 06:09:12.066136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:52.728 [2024-11-20 06:09:12.068689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.728 [2024-11-20 06:09:12.068830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.728 [2024-11-20 06:09:12.068848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.728 [2024-11-20 06:09:12.068864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.728 [2024-11-20 06:09:12.068873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.728 [2024-11-20 06:09:12.068880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.728 [2024-11-20 06:09:12.068890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.728 [2024-11-20 06:09:12.068897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.728 [2024-11-20 06:09:12.068905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.728 [2024-11-20 06:09:12.068912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.728 [2024-11-20 06:09:12.068920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.728 [2024-11-20 06:09:12.068927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.728 06:09:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.728 06:09:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.728 06:09:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.728 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:52.985 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.985 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.985 06:09:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.176 06:09:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.176 06:09:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.176 06:09:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.176 06:09:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.176 06:09:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.176 06:09:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:05.176 06:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:05.176 [2024-11-20 06:09:24.566353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:05.176 [2024-11-20 06:09:24.567461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.176 [2024-11-20 06:09:24.567508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.176 [2024-11-20 06:09:24.567520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.176 [2024-11-20 06:09:24.567538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.176 [2024-11-20 06:09:24.567545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.176 [2024-11-20 06:09:24.567554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.176 [2024-11-20 06:09:24.567562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.176 [2024-11-20 06:09:24.567570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.176 [2024-11-20 06:09:24.567577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.177 [2024-11-20 06:09:24.567585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.177 [2024-11-20 06:09:24.567592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.177 [2024-11-20 06:09:24.567600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.433 [2024-11-20 06:09:24.966359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:05.433 [2024-11-20 06:09:24.967639] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.433 [2024-11-20 06:09:24.967671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.433 [2024-11-20 06:09:24.967682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.433 [2024-11-20 06:09:24.967696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.433 [2024-11-20 06:09:24.967709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.433 [2024-11-20 06:09:24.967716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.433 [2024-11-20 06:09:24.967726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.433 [2024-11-20 06:09:24.967732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.433 [2024-11-20 06:09:24.967740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.433 [2024-11-20 06:09:24.967747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.433 [2024-11-20 06:09:24.967755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.433 [2024-11-20 06:09:24.967762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.433 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:05.434 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:05.434 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:05.434 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.434 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.434 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.434 06:09:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.434 06:09:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 06:09:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.690 06:09:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.891 06:09:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.891 06:09:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.891 06:09:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.891 [2024-11-20 06:09:37.366567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:17.891 [2024-11-20 06:09:37.367973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.891 [2024-11-20 06:09:37.368080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.891 [2024-11-20 06:09:37.368144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.891 [2024-11-20 06:09:37.368242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.891 [2024-11-20 06:09:37.368261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.891 [2024-11-20 06:09:37.368288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.891 [2024-11-20 06:09:37.368342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.891 [2024-11-20 06:09:37.368366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.891 [2024-11-20 06:09:37.368390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.891 [2024-11-20 06:09:37.368415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.891 [2024-11-20 06:09:37.368456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.891 [2024-11-20 06:09:37.368481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.891 06:09:37 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.891 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.892 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.892 06:09:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.892 06:09:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.892 06:09:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.892 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:17.892 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:18.457 [2024-11-20 06:09:37.866578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:18.457 [2024-11-20 06:09:37.868031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.457 [2024-11-20 06:09:37.868140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.457 [2024-11-20 06:09:37.868206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.457 [2024-11-20 06:09:37.868270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.457 [2024-11-20 06:09:37.868292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.457 [2024-11-20 06:09:37.868340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.457 [2024-11-20 06:09:37.868368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.457 [2024-11-20 06:09:37.868384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.457 [2024-11-20 06:09:37.868584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.457 [2024-11-20 06:09:37.868613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.457 [2024-11-20 06:09:37.868633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.457 [2024-11-20 06:09:37.868656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:18.457 06:09:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.457 06:09:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.457 06:09:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:18.457 06:09:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:18.457 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.457 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.457 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.716 06:09:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.69 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.69 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.69 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.69 2 00:11:30.911 remove_attach_helper took 44.69s to complete (handling 2 nvme drive(s)) 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:30.911 06:09:50 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67334 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67334 ']' 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67334 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67334 00:11:30.911 killing process with pid 67334 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67334' 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67334 00:11:30.911 06:09:50 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67334 00:11:32.303 06:09:51 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:32.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:32.874 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:32.874 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:32.874 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.874 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.874 00:11:32.874 real 2m29.958s 00:11:32.874 user 1m51.932s 00:11:32.874 sys 0m16.766s 00:11:32.874 ************************************ 00:11:32.874 END TEST sw_hotplug 00:11:32.874 ************************************ 00:11:32.874 06:09:52 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.874 06:09:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.874 06:09:52 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:32.874 06:09:52 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:32.874 06:09:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:32.874 06:09:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.874 06:09:52 -- common/autotest_common.sh@10 -- # set +x 00:11:32.874 ************************************ 00:11:32.874 START TEST nvme_xnvme 00:11:32.874 ************************************ 00:11:32.874 06:09:52 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:32.874 * Looking for test storage... 
00:11:32.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:32.874 06:09:52 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:32.874 06:09:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:11:32.874 06:09:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.132 06:09:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:33.132 06:09:52 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:33.133 06:09:52 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.133 06:09:52 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.133 --rc genhtml_branch_coverage=1 00:11:33.133 --rc genhtml_function_coverage=1 00:11:33.133 --rc genhtml_legend=1 00:11:33.133 --rc geninfo_all_blocks=1 00:11:33.133 --rc geninfo_unexecuted_blocks=1 00:11:33.133 00:11:33.133 ' 00:11:33.133 06:09:52 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.133 --rc genhtml_branch_coverage=1 00:11:33.133 --rc genhtml_function_coverage=1 00:11:33.133 --rc genhtml_legend=1 00:11:33.133 --rc geninfo_all_blocks=1 00:11:33.133 --rc geninfo_unexecuted_blocks=1 00:11:33.133 00:11:33.133 ' 00:11:33.133 06:09:52 
nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.133 --rc genhtml_branch_coverage=1 00:11:33.133 --rc genhtml_function_coverage=1 00:11:33.133 --rc genhtml_legend=1 00:11:33.133 --rc geninfo_all_blocks=1 00:11:33.133 --rc geninfo_unexecuted_blocks=1 00:11:33.133 00:11:33.133 ' 00:11:33.133 06:09:52 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.133 --rc genhtml_branch_coverage=1 00:11:33.133 --rc genhtml_function_coverage=1 00:11:33.133 --rc genhtml_legend=1 00:11:33.133 --rc geninfo_all_blocks=1 00:11:33.133 --rc geninfo_unexecuted_blocks=1 00:11:33.133 00:11:33.133 ' 00:11:33.133 06:09:52 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.133 06:09:52 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.133 06:09:52 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.133 06:09:52 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.133 06:09:52 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.133 06:09:52 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:33.133 06:09:52 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.133 06:09:52 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:33.133 06:09:52 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:33.133 06:09:52 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.133 06:09:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.133 
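Before the xnvme tests start, the harness checks the installed lcov against version 1.15 using the generic lt/cmp_versions helpers from scripts/common.sh traced above (@333-@368): both version strings are split on '.', '-' and ':' and compared component by component. A condensed sketch; the handling of the all-components-equal case is an assumption:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Missing components evaluate to 0 in arithmetic context.
            if ((ver1[v] > ver2[v])); then [[ $op == '>' ]]; return; fi
            if ((ver1[v] < ver2[v])); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]   # equal versions: only ==, <=, >= hold (assumption)
    }

    lt 1.15 2 && echo 'lcov is pre-2.x'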
************************************ 00:11:33.133 START TEST xnvme_to_malloc_dd_copy 00:11:33.133 ************************************ 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:33.133 06:09:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:33.133 { 00:11:33.133 "subsystems": [ 00:11:33.133 { 00:11:33.133 "subsystem": "bdev", 00:11:33.133 "config": [ 00:11:33.133 { 00:11:33.133 "params": { 00:11:33.133 "block_size": 512, 00:11:33.133 "num_blocks": 2097152, 00:11:33.133 "name": "malloc0" 00:11:33.133 }, 00:11:33.133 "method": "bdev_malloc_create" 00:11:33.133 }, 00:11:33.133 { 00:11:33.133 "params": { 00:11:33.133 "io_mechanism": "libaio", 00:11:33.133 "filename": "/dev/nullb0", 00:11:33.133 "name": "null0" 00:11:33.133 }, 00:11:33.133 "method": "bdev_xnvme_create" 00:11:33.133 }, 
00:11:33.133 { 00:11:33.133 "method": "bdev_wait_for_examine" 00:11:33.133 } 00:11:33.133 ] 00:11:33.133 } 00:11:33.133 ] 00:11:33.133 } 00:11:33.133 [2024-11-20 06:09:52.663073] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:11:33.133 [2024-11-20 06:09:52.663310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68718 ] 00:11:33.391 [2024-11-20 06:09:52.813867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.391 [2024-11-20 06:09:52.949096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.289  [2024-11-20T06:09:56.295Z] Copying: 228/1024 [MB] (228 MBps) [2024-11-20T06:09:57.227Z] Copying: 451/1024 [MB] (223 MBps) [2024-11-20T06:09:58.159Z] Copying: 681/1024 [MB] (230 MBps) [2024-11-20T06:09:58.417Z] Copying: 954/1024 [MB] (272 MBps) [2024-11-20T06:10:00.356Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:11:40.723 00:11:40.723 06:10:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:40.723 06:10:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:40.723 06:10:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:40.723 06:10:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:40.723 { 00:11:40.723 "subsystems": [ 00:11:40.723 { 00:11:40.723 "subsystem": "bdev", 00:11:40.723 "config": [ 00:11:40.723 { 00:11:40.723 "params": { 00:11:40.723 "block_size": 512, 00:11:40.723 "num_blocks": 2097152, 00:11:40.723 "name": "malloc0" 00:11:40.723 }, 00:11:40.723 "method": "bdev_malloc_create" 00:11:40.723 }, 00:11:40.723 { 00:11:40.723 "params": { 00:11:40.723 "io_mechanism": "libaio", 00:11:40.723 "filename": "/dev/nullb0", 00:11:40.723 "name": "null0" 00:11:40.723 }, 00:11:40.723 "method": "bdev_xnvme_create" 00:11:40.723 }, 00:11:40.723 { 00:11:40.723 "method": "bdev_wait_for_examine" 00:11:40.723 } 00:11:40.723 ] 00:11:40.723 } 00:11:40.723 ] 00:11:40.723 } 00:11:40.723 [2024-11-20 06:10:00.269941] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:11:40.723 [2024-11-20 06:10:00.270054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68805 ] 00:11:40.981 [2024-11-20 06:10:00.425299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.981 [2024-11-20 06:10:00.510245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.880  [2024-11-20T06:10:03.447Z] Copying: 297/1024 [MB] (297 MBps) [2024-11-20T06:10:04.380Z] Copying: 590/1024 [MB] (293 MBps) [2024-11-20T06:10:04.946Z] Copying: 889/1024 [MB] (298 MBps) [2024-11-20T06:10:06.944Z] Copying: 1024/1024 [MB] (average 296 MBps) 00:11:47.311 00:11:47.311 06:10:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:47.311 06:10:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:47.311 06:10:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:47.311 06:10:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:47.311 06:10:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:47.311 06:10:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:47.311 { 00:11:47.311 "subsystems": [ 00:11:47.311 { 00:11:47.311 "subsystem": "bdev", 00:11:47.311 "config": [ 00:11:47.311 { 00:11:47.311 "params": { 00:11:47.311 "block_size": 512, 00:11:47.311 "num_blocks": 2097152, 00:11:47.311 "name": "malloc0" 00:11:47.311 }, 00:11:47.311 "method": "bdev_malloc_create" 00:11:47.311 }, 00:11:47.311 { 00:11:47.311 "params": { 00:11:47.311 "io_mechanism": "io_uring", 00:11:47.311 "filename": "/dev/nullb0", 00:11:47.311 "name": "null0" 00:11:47.311 }, 00:11:47.311 "method": "bdev_xnvme_create" 00:11:47.311 }, 00:11:47.311 { 00:11:47.311 "method": "bdev_wait_for_examine" 00:11:47.311 } 00:11:47.311 ] 00:11:47.311 } 00:11:47.311 ] 00:11:47.311 } 00:11:47.311 [2024-11-20 06:10:06.744609] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:11:47.311 [2024-11-20 06:10:06.744744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68887 ] 00:11:47.311 [2024-11-20 06:10:06.900840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.569 [2024-11-20 06:10:06.984830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.467  [2024-11-20T06:10:10.034Z] Copying: 306/1024 [MB] (306 MBps) [2024-11-20T06:10:10.966Z] Copying: 613/1024 [MB] (306 MBps) [2024-11-20T06:10:11.225Z] Copying: 910/1024 [MB] (297 MBps) [2024-11-20T06:10:13.125Z] Copying: 1024/1024 [MB] (average 302 MBps) 00:11:53.492 00:11:53.749 06:10:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:53.749 06:10:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:53.749 06:10:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:53.749 06:10:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:53.749 { 00:11:53.749 "subsystems": [ 00:11:53.749 { 00:11:53.749 "subsystem": "bdev", 00:11:53.749 "config": [ 00:11:53.749 { 00:11:53.749 "params": { 00:11:53.749 "block_size": 512, 00:11:53.749 "num_blocks": 2097152, 00:11:53.749 "name": "malloc0" 00:11:53.749 }, 00:11:53.749 "method": "bdev_malloc_create" 00:11:53.749 }, 00:11:53.749 { 00:11:53.749 "params": { 00:11:53.749 "io_mechanism": "io_uring", 00:11:53.749 "filename": "/dev/nullb0", 00:11:53.749 "name": "null0" 00:11:53.749 }, 00:11:53.749 "method": "bdev_xnvme_create" 00:11:53.749 }, 00:11:53.749 { 00:11:53.749 "method": "bdev_wait_for_examine" 00:11:53.749 } 00:11:53.749 ] 00:11:53.749 } 00:11:53.749 ] 00:11:53.749 } 00:11:53.749 [2024-11-20 06:10:13.196762] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:11:53.749 [2024-11-20 06:10:13.196888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68965 ] 00:11:53.749 [2024-11-20 06:10:13.358068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.007 [2024-11-20 06:10:13.463318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.919  [2024-11-20T06:10:16.486Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-20T06:10:17.861Z] Copying: 485/1024 [MB] (242 MBps) [2024-11-20T06:10:18.427Z] Copying: 775/1024 [MB] (290 MBps) [2024-11-20T06:10:20.337Z] Copying: 1024/1024 [MB] (average 268 MBps) 00:12:00.704 00:12:00.704 06:10:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:00.704 06:10:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:00.704 00:12:00.704 real 0m27.649s 00:12:00.704 user 0m24.438s 00:12:00.704 sys 0m2.662s 00:12:00.704 06:10:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.704 06:10:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:00.704 ************************************ 00:12:00.704 END TEST xnvme_to_malloc_dd_copy 00:12:00.704 ************************************ 00:12:00.704 06:10:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:00.704 06:10:20 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:00.704 06:10:20 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.704 06:10:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:00.704 ************************************ 00:12:00.704 START TEST xnvme_bdevperf 00:12:00.704 ************************************ 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:00.704 
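The bdevperf run being configured next takes its bdev config over /dev/fd/62; a hedged equivalent with the config on disk, all flags (-q 64 -w randread -t 5 -T null0 -o 4096) copied from the trace itself:

cat > /tmp/null0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_xnvme_create",
          "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# 5-second 4 KiB random-read run at queue depth 64 against the null0 xnvme bdev:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/null0.json -q 64 -w randread -t 5 -T null0 -o 4096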
06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:00.704 06:10:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:00.704 { 00:12:00.704 "subsystems": [ 00:12:00.704 { 00:12:00.704 "subsystem": "bdev", 00:12:00.704 "config": [ 00:12:00.704 { 00:12:00.704 "params": { 00:12:00.704 "io_mechanism": "libaio", 00:12:00.704 "filename": "/dev/nullb0", 00:12:00.704 "name": "null0" 00:12:00.704 }, 00:12:00.704 "method": "bdev_xnvme_create" 00:12:00.704 }, 00:12:00.704 { 00:12:00.704 "method": "bdev_wait_for_examine" 00:12:00.704 } 00:12:00.704 ] 00:12:00.704 } 00:12:00.704 ] 00:12:00.704 } 00:12:00.963 [2024-11-20 06:10:20.416739] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:00.963 [2024-11-20 06:10:20.416954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69070 ] 00:12:01.221 [2024-11-20 06:10:20.596544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.221 [2024-11-20 06:10:20.683007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.478 Running I/O for 5 seconds... 00:12:03.375 191424.00 IOPS, 747.75 MiB/s [2024-11-20T06:10:23.940Z] 188992.00 IOPS, 738.25 MiB/s [2024-11-20T06:10:25.312Z] 188693.33 IOPS, 737.08 MiB/s [2024-11-20T06:10:26.245Z] 188624.00 IOPS, 736.81 MiB/s 00:12:06.612 Latency(us) 00:12:06.612 [2024-11-20T06:10:26.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.612 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:06.612 null0 : 5.00 188398.84 735.93 0.00 0.00 337.32 115.00 2558.42 00:12:06.612 [2024-11-20T06:10:26.245Z] =================================================================================================================== 00:12:06.612 [2024-11-20T06:10:26.245Z] Total : 188398.84 735.93 0.00 0.00 337.32 115.00 2558.42 00:12:06.871 06:10:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:06.871 06:10:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:06.871 06:10:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:06.871 06:10:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:06.871 06:10:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:06.871 06:10:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:07.129 { 00:12:07.129 "subsystems": [ 00:12:07.129 { 00:12:07.129 "subsystem": "bdev", 00:12:07.129 "config": [ 00:12:07.129 { 00:12:07.129 "params": { 00:12:07.129 "io_mechanism": "io_uring", 00:12:07.129 "filename": "/dev/nullb0", 00:12:07.129 "name": "null0" 00:12:07.129 }, 00:12:07.129 "method": "bdev_xnvme_create" 00:12:07.129 }, 00:12:07.129 { 00:12:07.129 "method": 
"bdev_wait_for_examine" 00:12:07.129 } 00:12:07.129 ] 00:12:07.129 } 00:12:07.129 ] 00:12:07.129 } 00:12:07.129 [2024-11-20 06:10:26.586472] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:07.129 [2024-11-20 06:10:26.586693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69144 ] 00:12:07.129 [2024-11-20 06:10:26.753964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.388 [2024-11-20 06:10:26.856691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.646 Running I/O for 5 seconds... 00:12:09.619 172672.00 IOPS, 674.50 MiB/s [2024-11-20T06:10:30.191Z] 173920.00 IOPS, 679.38 MiB/s [2024-11-20T06:10:31.129Z] 174400.00 IOPS, 681.25 MiB/s [2024-11-20T06:10:32.498Z] 173232.00 IOPS, 676.69 MiB/s 00:12:12.865 Latency(us) 00:12:12.865 [2024-11-20T06:10:32.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.866 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:12.866 null0 : 5.00 173644.74 678.30 0.00 0.00 365.55 196.14 2016.49 00:12:12.866 [2024-11-20T06:10:32.499Z] =================================================================================================================== 00:12:12.866 [2024-11-20T06:10:32.499Z] Total : 173644.74 678.30 0.00 0.00 365.55 196.14 2016.49 00:12:13.434 06:10:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:13.434 06:10:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:13.434 ************************************ 00:12:13.434 END TEST xnvme_bdevperf 00:12:13.434 ************************************ 00:12:13.434 00:12:13.434 real 0m12.582s 00:12:13.434 user 0m10.113s 00:12:13.434 sys 0m2.210s 00:12:13.434 06:10:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.434 06:10:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:13.434 00:12:13.434 real 0m40.472s 00:12:13.434 user 0m34.655s 00:12:13.434 sys 0m4.992s 00:12:13.434 ************************************ 00:12:13.434 END TEST nvme_xnvme 00:12:13.434 ************************************ 00:12:13.434 06:10:32 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.434 06:10:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.434 06:10:32 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:13.434 06:10:32 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:13.434 06:10:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.434 06:10:32 -- common/autotest_common.sh@10 -- # set +x 00:12:13.434 ************************************ 00:12:13.434 START TEST blockdev_xnvme 00:12:13.434 ************************************ 00:12:13.434 06:10:32 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:13.434 * Looking for test storage... 
00:12:13.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:13.434 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:13.434 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:13.434 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:13.693 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.693 06:10:33 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:13.693 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.693 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:13.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.693 --rc genhtml_branch_coverage=1 00:12:13.693 --rc genhtml_function_coverage=1 00:12:13.693 --rc genhtml_legend=1 00:12:13.693 --rc geninfo_all_blocks=1 00:12:13.693 --rc geninfo_unexecuted_blocks=1 00:12:13.693 00:12:13.693 ' 00:12:13.693 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:13.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.693 --rc genhtml_branch_coverage=1 00:12:13.693 --rc genhtml_function_coverage=1 00:12:13.693 --rc genhtml_legend=1 
00:12:13.693 --rc geninfo_all_blocks=1 00:12:13.693 --rc geninfo_unexecuted_blocks=1 00:12:13.693 00:12:13.693 ' 00:12:13.693 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:13.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.693 --rc genhtml_branch_coverage=1 00:12:13.693 --rc genhtml_function_coverage=1 00:12:13.693 --rc genhtml_legend=1 00:12:13.693 --rc geninfo_all_blocks=1 00:12:13.693 --rc geninfo_unexecuted_blocks=1 00:12:13.693 00:12:13.693 ' 00:12:13.693 06:10:33 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:13.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.693 --rc genhtml_branch_coverage=1 00:12:13.693 --rc genhtml_function_coverage=1 00:12:13.693 --rc genhtml_legend=1 00:12:13.693 --rc geninfo_all_blocks=1 00:12:13.693 --rc geninfo_unexecuted_blocks=1 00:12:13.693 00:12:13.693 ' 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:13.693 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:13.694 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:13.694 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:13.694 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:13.694 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69286 00:12:13.694 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:13.694 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69286 00:12:13.694 06:10:33 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:13.694 06:10:33 blockdev_xnvme -- common/autotest_common.sh@833 -- # 
'[' -z 69286 ']' 00:12:13.694 06:10:33 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.694 06:10:33 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:13.694 06:10:33 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.694 06:10:33 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:13.694 06:10:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.694 [2024-11-20 06:10:33.176434] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:13.694 [2024-11-20 06:10:33.176762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69286 ] 00:12:13.953 [2024-11-20 06:10:33.331351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.953 [2024-11-20 06:10:33.452377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.892 06:10:34 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:14.892 06:10:34 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:12:14.892 06:10:34 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:14.892 06:10:34 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:14.892 06:10:34 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:14.892 06:10:34 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:14.892 06:10:34 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:14.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:15.152 Waiting for block devices as requested 00:12:15.152 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.152 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.152 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.413 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.705 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:20.705 
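The get_zoned_devs walk in this stretch of trace reduces to one sysfs probe per namespace; a compact sketch of the same filter (variable names ours):

declare -A zoned_devs
for nvme in /sys/block/nvme*; do
    # a block device is zoned iff its queue/zoned attribute reads other than "none"
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1
    fi
done
echo "zoned namespaces: ${!zoned_devs[@]}"

Here every namespace reports "none", so all six are handed to bdev_xnvme_create with the io_uring mechanism in the loop that follows.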
06:10:39 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.705 06:10:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.705 06:10:39 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:20.705 nvme0n1 00:12:20.705 nvme1n1 00:12:20.705 nvme2n1 00:12:20.705 nvme2n2 00:12:20.705 nvme2n3 00:12:20.705 nvme3n1 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.705 06:10:40 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:20.705 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:20.706 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a16a600b-3c8a-4464-9176-0fb70a7a73aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a16a600b-3c8a-4464-9176-0fb70a7a73aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cdb3e4d6-7307-4daf-aac7-cfbd75f61621"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cdb3e4d6-7307-4daf-aac7-cfbd75f61621",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "439e1b50-ed1a-4fa8-8e11-8816091390ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "439e1b50-ed1a-4fa8-8e11-8816091390ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "0bdddb63-b6a1-4e7f-97c3-9668ae788637"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0bdddb63-b6a1-4e7f-97c3-9668ae788637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "76f986a9-f09e-4b0b-a857-3d189182d887"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "76f986a9-f09e-4b0b-a857-3d189182d887",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ce14cb71-b9e0-4c0c-b51c-39c83825c0ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ce14cb71-b9e0-4c0c-b51c-39c83825c0ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:20.706 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:20.706 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:20.706 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:20.706 06:10:40 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69286 00:12:20.706 06:10:40 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # '[' -z 69286 ']' 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69286 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69286 00:12:20.706 killing process with pid 69286 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69286' 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69286 00:12:20.706 06:10:40 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69286 00:12:22.619 06:10:41 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:22.619 06:10:41 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:22.619 06:10:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:22.619 06:10:41 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:22.619 06:10:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.619 ************************************ 00:12:22.619 START TEST bdev_hello_world 00:12:22.619 ************************************ 00:12:22.619 06:10:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:22.619 [2024-11-20 06:10:41.962631] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:22.619 [2024-11-20 06:10:41.962996] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69650 ] 00:12:22.619 [2024-11-20 06:10:42.120877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.880 [2024-11-20 06:10:42.251564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.141 [2024-11-20 06:10:42.666454] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:23.141 [2024-11-20 06:10:42.666872] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:23.141 [2024-11-20 06:10:42.666905] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:23.141 [2024-11-20 06:10:42.669016] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:23.141 [2024-11-20 06:10:42.669429] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:23.141 [2024-11-20 06:10:42.669456] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:23.141 [2024-11-20 06:10:42.670109] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
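The NOTICE sequence around this point is the whole hello_bdev walkthrough: start the app, open the bdev, grab an I/O channel, write the string, read it back, stop. Stripped of run_test, it is a single invocation, with the config path and bdev name taken from the trace above:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
# Expect the same NOTICEs: Successfully started -> Opening the bdev nvme0n1 ->
# Opening io channel -> write completed -> "Read string from bdev : Hello World!"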
00:12:23.141 00:12:23.141 [2024-11-20 06:10:42.670258] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:24.112 00:12:24.112 real 0m1.570s 00:12:24.112 user 0m1.176s 00:12:24.112 sys 0m0.237s 00:12:24.112 06:10:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.112 ************************************ 00:12:24.112 END TEST bdev_hello_world 00:12:24.112 ************************************ 00:12:24.112 06:10:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:24.112 06:10:43 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:24.112 06:10:43 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:24.112 06:10:43 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.112 06:10:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.112 ************************************ 00:12:24.112 START TEST bdev_bounds 00:12:24.112 ************************************ 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:12:24.112 Process bdevio pid: 69691 00:12:24.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69691 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69691' 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69691 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69691 ']' 00:12:24.112 06:10:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.113 06:10:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:24.113 06:10:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.113 06:10:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:24.113 06:10:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 [2024-11-20 06:10:43.609422] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:12:24.113 [2024-11-20 06:10:43.609994] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69691 ] 00:12:24.373 [2024-11-20 06:10:43.775327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:24.373 [2024-11-20 06:10:43.940912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.373 [2024-11-20 06:10:43.941156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.373 [2024-11-20 06:10:43.941156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.976 06:10:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:24.976 06:10:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:12:24.976 06:10:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:24.976 I/O targets: 00:12:24.976 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:24.976 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:24.976 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:24.976 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:24.976 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:24.976 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:24.976 00:12:24.976 00:12:24.976 CUnit - A unit testing framework for C - Version 2.1-3 00:12:24.976 http://cunit.sourceforge.net/ 00:12:24.976 00:12:24.976 00:12:24.976 Suite: bdevio tests on: nvme3n1 00:12:24.976 Test: blockdev write read block ...passed 00:12:24.976 Test: blockdev write zeroes read block ...passed 00:12:24.976 Test: blockdev write zeroes read no split ...passed 00:12:25.238 Test: blockdev write zeroes read split ...passed 00:12:25.238 Test: blockdev write zeroes read split partial ...passed 00:12:25.238 Test: blockdev reset ...passed 00:12:25.238 Test: blockdev write read 8 blocks ...passed 00:12:25.238 Test: blockdev write read size > 128k ...passed 00:12:25.238 Test: blockdev write read invalid size ...passed 00:12:25.238 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.238 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.238 Test: blockdev write read max offset ...passed 00:12:25.238 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.238 Test: blockdev writev readv 8 blocks ...passed 00:12:25.238 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.238 Test: blockdev writev readv block ...passed 00:12:25.238 Test: blockdev writev readv size > 128k ...passed 00:12:25.238 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.238 Test: blockdev comparev and writev ...passed 00:12:25.238 Test: blockdev nvme passthru rw ...passed 00:12:25.238 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.238 Test: blockdev nvme admin passthru ...passed 00:12:25.238 Test: blockdev copy ...passed 00:12:25.238 Suite: bdevio tests on: nvme2n3 00:12:25.238 Test: blockdev write read block ...passed 00:12:25.238 Test: blockdev write zeroes read block ...passed 00:12:25.238 Test: blockdev write zeroes read no split ...passed 00:12:25.238 Test: blockdev write zeroes read split ...passed 00:12:25.238 Test: blockdev write zeroes read split partial ...passed 00:12:25.238 Test: blockdev reset ...passed 
00:12:25.238 Test: blockdev write read 8 blocks ...passed 00:12:25.238 Test: blockdev write read size > 128k ...passed 00:12:25.238 Test: blockdev write read invalid size ...passed 00:12:25.238 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.238 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.238 Test: blockdev write read max offset ...passed 00:12:25.238 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.238 Test: blockdev writev readv 8 blocks ...passed 00:12:25.238 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.238 Test: blockdev writev readv block ...passed 00:12:25.238 Test: blockdev writev readv size > 128k ...passed 00:12:25.238 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.238 Test: blockdev comparev and writev ...passed 00:12:25.238 Test: blockdev nvme passthru rw ...passed 00:12:25.238 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.238 Test: blockdev nvme admin passthru ...passed 00:12:25.238 Test: blockdev copy ...passed 00:12:25.238 Suite: bdevio tests on: nvme2n2 00:12:25.238 Test: blockdev write read block ...passed 00:12:25.238 Test: blockdev write zeroes read block ...passed 00:12:25.238 Test: blockdev write zeroes read no split ...passed 00:12:25.238 Test: blockdev write zeroes read split ...passed 00:12:25.238 Test: blockdev write zeroes read split partial ...passed 00:12:25.239 Test: blockdev reset ...passed 00:12:25.239 Test: blockdev write read 8 blocks ...passed 00:12:25.239 Test: blockdev write read size > 128k ...passed 00:12:25.239 Test: blockdev write read invalid size ...passed 00:12:25.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.239 Test: blockdev write read max offset ...passed 00:12:25.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.239 Test: blockdev writev readv 8 blocks ...passed 00:12:25.239 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.239 Test: blockdev writev readv block ...passed 00:12:25.239 Test: blockdev writev readv size > 128k ...passed 00:12:25.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.239 Test: blockdev comparev and writev ...passed 00:12:25.239 Test: blockdev nvme passthru rw ...passed 00:12:25.239 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.239 Test: blockdev nvme admin passthru ...passed 00:12:25.239 Test: blockdev copy ...passed 00:12:25.239 Suite: bdevio tests on: nvme2n1 00:12:25.239 Test: blockdev write read block ...passed 00:12:25.239 Test: blockdev write zeroes read block ...passed 00:12:25.239 Test: blockdev write zeroes read no split ...passed 00:12:25.239 Test: blockdev write zeroes read split ...passed 00:12:25.502 Test: blockdev write zeroes read split partial ...passed 00:12:25.502 Test: blockdev reset ...passed 00:12:25.502 Test: blockdev write read 8 blocks ...passed 00:12:25.502 Test: blockdev write read size > 128k ...passed 00:12:25.502 Test: blockdev write read invalid size ...passed 00:12:25.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.502 Test: blockdev write read max offset ...passed 00:12:25.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.502 Test: blockdev writev readv 8 blocks 
...passed 00:12:25.502 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.502 Test: blockdev writev readv block ...passed 00:12:25.502 Test: blockdev writev readv size > 128k ...passed 00:12:25.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.502 Test: blockdev comparev and writev ...passed 00:12:25.502 Test: blockdev nvme passthru rw ...passed 00:12:25.502 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.502 Test: blockdev nvme admin passthru ...passed 00:12:25.502 Test: blockdev copy ...passed 00:12:25.502 Suite: bdevio tests on: nvme1n1 00:12:25.502 Test: blockdev write read block ...passed 00:12:25.502 Test: blockdev write zeroes read block ...passed 00:12:25.502 Test: blockdev write zeroes read no split ...passed 00:12:25.502 Test: blockdev write zeroes read split ...passed 00:12:25.502 Test: blockdev write zeroes read split partial ...passed 00:12:25.502 Test: blockdev reset ...passed 00:12:25.502 Test: blockdev write read 8 blocks ...passed 00:12:25.502 Test: blockdev write read size > 128k ...passed 00:12:25.502 Test: blockdev write read invalid size ...passed 00:12:25.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.502 Test: blockdev write read max offset ...passed 00:12:25.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.502 Test: blockdev writev readv 8 blocks ...passed 00:12:25.502 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.502 Test: blockdev writev readv block ...passed 00:12:25.502 Test: blockdev writev readv size > 128k ...passed 00:12:25.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.502 Test: blockdev comparev and writev ...passed 00:12:25.502 Test: blockdev nvme passthru rw ...passed 00:12:25.502 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.502 Test: blockdev nvme admin passthru ...passed 00:12:25.502 Test: blockdev copy ...passed 00:12:25.502 Suite: bdevio tests on: nvme0n1 00:12:25.502 Test: blockdev write read block ...passed 00:12:25.502 Test: blockdev write zeroes read block ...passed 00:12:25.502 Test: blockdev write zeroes read no split ...passed 00:12:25.502 Test: blockdev write zeroes read split ...passed 00:12:25.502 Test: blockdev write zeroes read split partial ...passed 00:12:25.502 Test: blockdev reset ...passed 00:12:25.502 Test: blockdev write read 8 blocks ...passed 00:12:25.502 Test: blockdev write read size > 128k ...passed 00:12:25.502 Test: blockdev write read invalid size ...passed 00:12:25.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.502 Test: blockdev write read max offset ...passed 00:12:25.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.502 Test: blockdev writev readv 8 blocks ...passed 00:12:25.502 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.502 Test: blockdev writev readv block ...passed 00:12:25.502 Test: blockdev writev readv size > 128k ...passed 00:12:25.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.502 Test: blockdev comparev and writev ...passed 00:12:25.502 Test: blockdev nvme passthru rw ...passed 00:12:25.502 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.502 Test: blockdev nvme admin passthru ...passed 00:12:25.502 Test: blockdev copy ...passed 
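All six suites above come from one bdevio server plus its RPC test driver; the CUnit summary that follows aggregates them. A hedged sketch of the command pair behind it, with the waitforlisten handshake between the two steps elided:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
# ... wait for the RPC socket to answer, then drive every registered test:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"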
00:12:25.502 00:12:25.502 Run Summary: Type Total Ran Passed Failed Inactive 00:12:25.503 suites 6 6 n/a 0 0 00:12:25.503 tests 138 138 138 0 0 00:12:25.503 asserts 780 780 780 0 n/a 00:12:25.503 00:12:25.503 Elapsed time = 1.280 seconds 00:12:25.503 0 00:12:25.503 06:10:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69691 00:12:25.503 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69691 ']' 00:12:25.503 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69691 00:12:25.503 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:12:25.503 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:25.503 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69691 00:12:25.765 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:25.765 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:25.765 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69691' 00:12:25.765 killing process with pid 69691 00:12:25.765 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69691 00:12:25.765 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69691 00:12:26.709 06:10:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:26.709 00:12:26.709 real 0m2.439s 00:12:26.709 user 0m5.774s 00:12:26.709 sys 0m0.411s 00:12:26.709 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.709 ************************************ 00:12:26.709 END TEST bdev_bounds 00:12:26.709 ************************************ 00:12:26.709 06:10:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:26.709 06:10:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:26.709 06:10:46 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:26.709 06:10:46 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.709 06:10:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.709 ************************************ 00:12:26.709 START TEST bdev_nbd 00:12:26.709 ************************************ 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:26.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69747 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69747 /var/tmp/spdk-nbd.sock 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69747 ']' 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.709 06:10:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:26.709 [2024-11-20 06:10:46.131599] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:12:26.709 [2024-11-20 06:10:46.131999] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.709 [2024-11-20 06:10:46.298651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.969 [2024-11-20 06:10:46.436113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:27.535 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.793 
1+0 records in 00:12:27.793 1+0 records out 00:12:27.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642504 s, 6.4 MB/s 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.793 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:27.794 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:27.794 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:27.794 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:27.794 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.053 1+0 records in 00:12:28.053 1+0 records out 00:12:28.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00147036 s, 2.8 MB/s 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.053 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:28.313 06:10:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.313 1+0 records in 00:12:28.313 1+0 records out 00:12:28.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118219 s, 3.5 MB/s 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:28.313 06:10:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:28.314 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.314 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.314 06:10:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.583 1+0 records in 00:12:28.583 1+0 records out 00:12:28.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011658 s, 3.5 MB/s 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.583 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.876 1+0 records in 00:12:28.876 1+0 records out 00:12:28.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133096 s, 3.1 MB/s 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.876 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:28.877 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.877 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:28.877 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:28.877 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.877 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.877 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@871 -- # local i 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.137 1+0 records in 00:12:29.137 1+0 records out 00:12:29.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100536 s, 4.1 MB/s 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:29.137 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd0", 00:12:29.398 "bdev_name": "nvme0n1" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd1", 00:12:29.398 "bdev_name": "nvme1n1" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd2", 00:12:29.398 "bdev_name": "nvme2n1" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd3", 00:12:29.398 "bdev_name": "nvme2n2" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd4", 00:12:29.398 "bdev_name": "nvme2n3" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd5", 00:12:29.398 "bdev_name": "nvme3n1" 00:12:29.398 } 00:12:29.398 ]' 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd0", 00:12:29.398 "bdev_name": "nvme0n1" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd1", 00:12:29.398 "bdev_name": "nvme1n1" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd2", 00:12:29.398 "bdev_name": "nvme2n1" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd3", 00:12:29.398 "bdev_name": "nvme2n2" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd4", 00:12:29.398 "bdev_name": "nvme2n3" 00:12:29.398 }, 00:12:29.398 { 00:12:29.398 "nbd_device": "/dev/nbd5", 00:12:29.398 "bdev_name": "nvme3n1" 00:12:29.398 } 00:12:29.398 ]' 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.398 06:10:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.658 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.920 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.182 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.444 06:10:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.707 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.968 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:31.229 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.230 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:31.230 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.230 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.230 06:10:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:31.491 /dev/nbd0 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.491 1+0 records in 00:12:31.491 1+0 records out 00:12:31.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130413 s, 3.1 MB/s 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.491 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:31.758 /dev/nbd1 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.758 1+0 records in 00:12:31.758 1+0 records out 00:12:31.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134112 s, 3.1 MB/s 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:31.758 06:10:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.758 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:32.071 /dev/nbd10 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.071 1+0 records in 00:12:32.071 1+0 records out 00:12:32.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833833 s, 4.9 MB/s 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.071 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:32.362 /dev/nbd11 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:32.362 06:10:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.362 1+0 records in 00:12:32.362 1+0 records out 00:12:32.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123967 s, 3.3 MB/s 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:32.362 /dev/nbd12 00:12:32.362 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:32.623 06:10:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.623 1+0 records in 00:12:32.623 1+0 records out 00:12:32.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109212 s, 3.8 MB/s 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.623 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:32.623 /dev/nbd13 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.885 1+0 records in 00:12:32.885 1+0 records out 00:12:32.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134647 s, 3.0 MB/s 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd0", 00:12:32.885 "bdev_name": "nvme0n1" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd1", 00:12:32.885 "bdev_name": "nvme1n1" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd10", 00:12:32.885 "bdev_name": "nvme2n1" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd11", 00:12:32.885 "bdev_name": "nvme2n2" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd12", 00:12:32.885 "bdev_name": "nvme2n3" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd13", 00:12:32.885 "bdev_name": "nvme3n1" 00:12:32.885 } 00:12:32.885 ]' 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd0", 00:12:32.885 "bdev_name": "nvme0n1" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd1", 00:12:32.885 "bdev_name": "nvme1n1" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd10", 00:12:32.885 "bdev_name": "nvme2n1" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd11", 00:12:32.885 "bdev_name": "nvme2n2" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd12", 00:12:32.885 "bdev_name": "nvme2n3" 00:12:32.885 }, 00:12:32.885 { 00:12:32.885 "nbd_device": "/dev/nbd13", 00:12:32.885 "bdev_name": "nvme3n1" 00:12:32.885 } 00:12:32.885 ]' 00:12:32.885 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:33.147 /dev/nbd1 00:12:33.147 /dev/nbd10 00:12:33.147 /dev/nbd11 00:12:33.147 /dev/nbd12 00:12:33.147 /dev/nbd13' 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:33.147 /dev/nbd1 00:12:33.147 /dev/nbd10 00:12:33.147 /dev/nbd11 00:12:33.147 /dev/nbd12 00:12:33.147 /dev/nbd13' 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:33.147 256+0 records in 00:12:33.147 256+0 records out 00:12:33.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0081931 s, 128 MB/s 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.147 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:33.409 256+0 records in 00:12:33.409 256+0 records out 00:12:33.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232462 s, 4.5 MB/s 00:12:33.409 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.409 06:10:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:33.670 256+0 records in 00:12:33.670 256+0 records out 00:12:33.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.344385 s, 
3.0 MB/s 00:12:33.670 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.670 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:33.932 256+0 records in 00:12:33.932 256+0 records out 00:12:33.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.258573 s, 4.1 MB/s 00:12:33.932 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.932 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:34.193 256+0 records in 00:12:34.193 256+0 records out 00:12:34.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236779 s, 4.4 MB/s 00:12:34.193 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.193 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:34.193 256+0 records in 00:12:34.193 256+0 records out 00:12:34.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123275 s, 8.5 MB/s 00:12:34.193 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.193 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:34.453 256+0 records in 00:12:34.453 256+0 records out 00:12:34.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180427 s, 5.8 MB/s 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:34.453 
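The write pass above and the verify pass that continues below follow a simple pattern: fill a 1 MiB scratch file from /dev/urandom, dd it onto each NBD device with oflag=direct so the writes bypass the page cache, then byte-compare the first 1 MiB of each device against the scratch file with cmp. Condensed into a standalone sketch (the scratch path and device list mirror the trace; treat them as illustrative):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # direct I/O, no page cache
        cmp -b -n 1M "$tmp" "$nbd"                             # any mismatch makes cmp exit non-zero
    done
    rm "$tmp"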
06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:34.453 06:10:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:34.453 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:34.453 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.453 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.453 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.453 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:34.453 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.453 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.714 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.975 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:35.236 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.502 06:10:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.502 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.503 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.761 
06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.761 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:36.021 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:36.281 malloc_lvol_verify 00:12:36.281 06:10:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:36.539 868489ea-2bcb-40b7-acad-277dae5f09f5 00:12:36.539 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:36.798 f9406cb8-1449-4c1b-8145-4c1bd5dec38d 00:12:36.798 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:37.057 /dev/nbd0 00:12:37.057 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:37.058 mke2fs 1.47.0 (5-Feb-2023) 00:12:37.058 Discarding device blocks: 0/4096 
done 00:12:37.058 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:37.058 00:12:37.058 Allocating group tables: 0/1 done 00:12:37.058 Writing inode tables: 0/1 done 00:12:37.058 Creating journal (1024 blocks): done 00:12:37.058 Writing superblocks and filesystem accounting information: 0/1 done 00:12:37.058 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.058 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69747 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69747 ']' 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69747 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69747 00:12:37.317 killing process with pid 69747 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69747' 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69747 00:12:37.317 06:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69747 00:12:37.886 06:10:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:37.886 00:12:37.886 real 0m11.450s 00:12:37.886 user 0m15.502s 00:12:37.886 sys 0m3.934s 00:12:37.886 06:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.886 06:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:37.886 ************************************ 00:12:37.886 END TEST bdev_nbd 00:12:37.886 ************************************ 
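The bdev_nbd stage above boils down to a simple write/verify pattern: 1 MiB of random data is written through each exported /dev/nbdX with O_DIRECT, then compared back byte for byte against the source file. A condensed, illustrative sketch of that loop (the real logic lives in test/bdev/nbd_common.sh; this standalone script is an approximation, not the harness itself):

  # Illustrative sketch of the nbd_common.sh write/verify loop, not the actual script.
  src=/tmp/nbdrandtest
  dd if=/dev/urandom of="$src" bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      dd if="$src" of="$nbd" bs=4096 count=256 oflag=direct # write the pattern out
  done
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      cmp -b -n 1M "$src" "$nbd"                            # byte-for-byte read-back check
  done
  rm "$src"

The follow-up lvol check in the same stage exports a 4 MiB logical volume over /dev/nbd0 and runs mkfs.ext4 on it (the "4096 1k blocks" output above), confirming the NBD device reports a usable, correctly sized backing store before all disks are stopped and the daemon (pid 69747) is killed.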
00:12:38.147 06:10:57 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:38.147 06:10:57 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:38.147 06:10:57 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:38.147 06:10:57 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:38.147 06:10:57 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:38.147 06:10:57 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.147 06:10:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.147 ************************************ 00:12:38.147 START TEST bdev_fio 00:12:38.147 ************************************ 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:12:38.147 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:38.147 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:38.148 ************************************ 00:12:38.148 START TEST bdev_fio_rw_verify 00:12:38.148 ************************************ 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:38.148 06:10:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.408 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.408 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.408 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.408 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.409 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.409 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.409 fio-3.35 00:12:38.409 Starting 6 threads 00:12:50.724 00:12:50.724 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70160: Wed Nov 20 06:11:08 2024 00:12:50.724 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(475MiB/10001msec) 00:12:50.724 slat (usec): min=2, max=2704, avg= 5.90, stdev=16.89 00:12:50.724 clat (usec): min=74, max=403544, avg=1501.03, stdev=2301.33 00:12:50.724 lat (usec): min=78, max=403548, avg=1506.93, stdev=2301.51 
00:12:50.724 clat percentiles (usec): 00:12:50.724 | 50.000th=[ 1352], 99.000th=[ 4178], 99.900th=[ 5735], 00:12:50.724 | 99.990th=[ 7373], 99.999th=[304088] 00:12:50.724 write: IOPS=12.5k, BW=48.7MiB/s (51.1MB/s)(487MiB/10001msec); 0 zone resets 00:12:50.724 slat (usec): min=12, max=6409, avg=44.37, stdev=163.59 00:12:50.724 clat (usec): min=92, max=456384, avg=2026.35, stdev=9587.45 00:12:50.724 lat (usec): min=106, max=456408, avg=2070.72, stdev=9588.90 00:12:50.724 clat percentiles (usec): 00:12:50.724 | 50.000th=[ 1565], 99.000th=[ 4686], 99.900th=[128451], 00:12:50.724 | 99.990th=[404751], 99.999th=[455082] 00:12:50.724 bw ( KiB/s): min=20537, max=63353, per=99.75%, avg=49788.63, stdev=1784.97, samples=114 00:12:50.724 iops : min= 5134, max=15837, avg=12446.53, stdev=446.26, samples=114 00:12:50.724 lat (usec) : 100=0.01%, 250=2.05%, 500=5.60%, 750=8.03%, 1000=10.77% 00:12:50.724 lat (msec) : 2=45.90%, 4=25.72%, 10=1.83%, 20=0.01%, 50=0.01% 00:12:50.724 lat (msec) : 250=0.02%, 500=0.04% 00:12:50.724 cpu : usr=48.80%, sys=28.52%, ctx=4857, majf=0, minf=13128 00:12:50.724 IO depths : 1=11.7%, 2=24.2%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.724 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.724 issued rwts: total=121577,124793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.724 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:50.724 00:12:50.724 Run status group 0 (all jobs): 00:12:50.724 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=475MiB (498MB), run=10001-10001msec 00:12:50.724 WRITE: bw=48.7MiB/s (51.1MB/s), 48.7MiB/s-48.7MiB/s (51.1MB/s-51.1MB/s), io=487MiB (511MB), run=10001-10001msec 00:12:50.724 ----------------------------------------------------- 00:12:50.724 Suppressions used: 00:12:50.724 count bytes template 00:12:50.724 6 48 /usr/src/fio/parse.c 00:12:50.724 3139 301344 /usr/src/fio/iolog.c 00:12:50.724 1 8 libtcmalloc_minimal.so 00:12:50.724 1 904 libcrypto.so 00:12:50.724 ----------------------------------------------------- 00:12:50.724 00:12:50.724 00:12:50.724 real 0m11.922s 00:12:50.724 user 0m30.775s 00:12:50.724 sys 0m17.402s 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.724 ************************************ 00:12:50.724 END TEST bdev_fio_rw_verify 00:12:50.724 ************************************ 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:12:50.724 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a16a600b-3c8a-4464-9176-0fb70a7a73aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a16a600b-3c8a-4464-9176-0fb70a7a73aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cdb3e4d6-7307-4daf-aac7-cfbd75f61621"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cdb3e4d6-7307-4daf-aac7-cfbd75f61621",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "439e1b50-ed1a-4fa8-8e11-8816091390ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "439e1b50-ed1a-4fa8-8e11-8816091390ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "0bdddb63-b6a1-4e7f-97c3-9668ae788637"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0bdddb63-b6a1-4e7f-97c3-9668ae788637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "76f986a9-f09e-4b0b-a857-3d189182d887"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "76f986a9-f09e-4b0b-a857-3d189182d887",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ce14cb71-b9e0-4c0c-b51c-39c83825c0ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ce14cb71-b9e0-4c0c-b51c-39c83825c0ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:50.725 /home/vagrant/spdk_repo/spdk 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
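For the fio suite above, the harness generates bdev.fio on the fly: fio_config_gen writes the verify workload skeleton, serialize_overlap=1 is appended once the fio-3.35 version check passes, and one [job_*] section per bdev is echoed in, as the log shows for nvme0n1 through nvme3n1. An approximate reconstruction of that flow (the generated file is deleted after the run, so its exact full contents are not visible in the log):

  # Approximate reconstruction; the real bdev.fio is generated and then removed.
  cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
  serialize_overlap=1
  [job_nvme0n1]
  filename=nvme0n1
  [job_nvme1n1]
  filename=nvme1n1
  EOF
  # The remaining knobs arrive on the fio command line, exactly as in the log:
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

Two details worth noting from the log: the harness ldd's the fio plugin and, on finding /usr/lib64/libasan.so.8, LD_PRELOADs it ahead of spdk_bdev so the ASan runtime initializes first; and the trim pass that follows keeps only bdevs whose supported_io_types.unmap is true. Every xNVMe bdev in the JSON dump above reports "unmap": false, so the trim job list comes out empty and the suite moves on.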
00:12:50.725 00:12:50.725 real 0m12.101s 00:12:50.725 user 0m30.851s 00:12:50.725 sys 0m17.471s 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.725 06:11:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 ************************************ 00:12:50.725 END TEST bdev_fio 00:12:50.725 ************************************ 00:12:50.725 06:11:09 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:50.725 06:11:09 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:50.725 06:11:09 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:12:50.725 06:11:09 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.725 06:11:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 ************************************ 00:12:50.725 START TEST bdev_verify 00:12:50.725 ************************************ 00:12:50.725 06:11:09 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:50.725 [2024-11-20 06:11:09.796910] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:50.725 [2024-11-20 06:11:09.797018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70332 ] 00:12:50.725 [2024-11-20 06:11:09.950527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:50.725 [2024-11-20 06:11:10.055649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.725 [2024-11-20 06:11:10.055890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.987 Running I/O for 5 seconds... 
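bdev_verify drives every xNVMe bdev through bdevperf for five seconds of 4 KiB verified I/O at queue depth 128. The core mask 0x3 brings up two reactors, so each bdev gets one job per core; that is why every device appears twice in the table below, once with Core Mask 0x1 and once with 0x2. The invocation, exactly as in the log, is runnable standalone once bdev.json exists:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3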
00:12:53.337 21600.00 IOPS, 84.38 MiB/s [2024-11-20T06:11:13.912Z] 21984.00 IOPS, 85.88 MiB/s [2024-11-20T06:11:14.856Z] 22442.67 IOPS, 87.67 MiB/s [2024-11-20T06:11:15.874Z] 22640.75 IOPS, 88.44 MiB/s [2024-11-20T06:11:15.874Z] 23021.40 IOPS, 89.93 MiB/s 00:12:56.241 Latency(us) 00:12:56.241 [2024-11-20T06:11:15.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.241 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x0 length 0xa0000 00:12:56.241 nvme0n1 : 5.05 1877.14 7.33 0.00 0.00 68043.15 8570.09 75013.51 00:12:56.241 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0xa0000 length 0xa0000 00:12:56.241 nvme0n1 : 5.03 1755.50 6.86 0.00 0.00 72768.20 9628.75 75416.81 00:12:56.241 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x0 length 0xbd0bd 00:12:56.241 nvme1n1 : 5.06 2277.12 8.89 0.00 0.00 55875.64 5772.21 71787.13 00:12:56.241 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:56.241 nvme1n1 : 5.06 2209.10 8.63 0.00 0.00 57689.23 5847.83 71383.83 00:12:56.241 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x0 length 0x80000 00:12:56.241 nvme2n1 : 5.06 1921.47 7.51 0.00 0.00 66084.49 6654.42 62914.56 00:12:56.241 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x80000 length 0x80000 00:12:56.241 nvme2n1 : 5.06 1797.06 7.02 0.00 0.00 70594.11 10384.94 67754.14 00:12:56.241 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x0 length 0x80000 00:12:56.241 nvme2n2 : 5.06 1920.92 7.50 0.00 0.00 65944.31 7864.32 66140.95 00:12:56.241 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x80000 length 0x80000 00:12:56.241 nvme2n2 : 5.07 1791.30 7.00 0.00 0.00 70666.49 6805.66 60494.77 00:12:56.241 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x0 length 0x80000 00:12:56.241 nvme2n3 : 5.07 1895.12 7.40 0.00 0.00 66699.93 7561.85 62914.56 00:12:56.241 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x80000 length 0x80000 00:12:56.241 nvme2n3 : 5.08 1789.35 6.99 0.00 0.00 70595.73 3932.16 68560.74 00:12:56.241 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x0 length 0x20000 00:12:56.241 nvme3n1 : 5.07 1917.91 7.49 0.00 0.00 65764.78 4940.41 69367.34 00:12:56.241 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:56.241 Verification LBA range: start 0x20000 length 0x20000 00:12:56.241 nvme3n1 : 5.08 1787.82 6.98 0.00 0.00 70526.16 5217.67 68560.74 00:12:56.241 [2024-11-20T06:11:15.874Z] =================================================================================================================== 00:12:56.241 [2024-11-20T06:11:15.874Z] Total : 22939.81 89.61 0.00 0.00 66356.96 3932.16 75416.81 00:12:56.811 00:12:56.811 real 0m6.548s 00:12:56.811 user 0m10.940s 00:12:56.811 sys 0m1.171s 00:12:56.811 06:11:16 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:56.811 ************************************ 00:12:56.811 END TEST bdev_verify 00:12:56.811 ************************************ 00:12:56.811 06:11:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:56.811 06:11:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:56.811 06:11:16 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:12:56.811 06:11:16 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:56.811 06:11:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.811 ************************************ 00:12:56.811 START TEST bdev_verify_big_io 00:12:56.811 ************************************ 00:12:56.811 06:11:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:56.811 [2024-11-20 06:11:16.403675] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:12:56.811 [2024-11-20 06:11:16.403788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70437 ] 00:12:57.072 [2024-11-20 06:11:16.572712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.072 [2024-11-20 06:11:16.680945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.072 [2024-11-20 06:11:16.681183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.644 Running I/O for 5 seconds... 
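bdev_verify_big_io is the same harness with one change: 64 KiB I/O (-o 65536) instead of 4 KiB, which stresses the bdev layer's split and buffer handling, hence the lower IOPS but larger per-I/O throughput in the table below.

  # Identical to the verify run above except for the I/O size:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3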
00:13:03.762 1104.00 IOPS, 69.00 MiB/s [2024-11-20T06:11:23.395Z] 2688.00 IOPS, 168.00 MiB/s [2024-11-20T06:11:23.395Z] 3030.67 IOPS, 189.42 MiB/s 00:13:03.762 Latency(us) 00:13:03.762 [2024-11-20T06:11:23.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.762 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x0 length 0xa000 00:13:03.762 nvme0n1 : 6.00 110.59 6.91 0.00 0.00 1103736.18 112116.97 1135688.47 00:13:03.762 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0xa000 length 0xa000 00:13:03.762 nvme0n1 : 5.77 133.05 8.32 0.00 0.00 922083.64 53235.40 1180857.90 00:13:03.762 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x0 length 0xbd0b 00:13:03.762 nvme1n1 : 5.59 160.26 10.02 0.00 0.00 745882.16 10284.11 948557.98 00:13:03.762 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:03.762 nvme1n1 : 5.69 109.68 6.86 0.00 0.00 1080945.23 158899.59 1703532.70 00:13:03.762 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x0 length 0x8000 00:13:03.762 nvme2n1 : 5.85 139.46 8.72 0.00 0.00 827193.53 8519.68 1400252.26 00:13:03.762 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x8000 length 0x8000 00:13:03.762 nvme2n1 : 5.85 109.34 6.83 0.00 0.00 1050213.47 3352.42 1187310.67 00:13:03.762 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x0 length 0x8000 00:13:03.762 nvme2n2 : 5.93 129.47 8.09 0.00 0.00 857194.08 108890.58 1400252.26 00:13:03.762 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x8000 length 0x8000 00:13:03.762 nvme2n2 : 5.86 117.41 7.34 0.00 0.00 947020.41 83079.48 1897115.96 00:13:03.762 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x0 length 0x8000 00:13:03.762 nvme2n3 : 6.01 106.52 6.66 0.00 0.00 1019649.42 70173.93 3110237.74 00:13:03.762 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x8000 length 0x8000 00:13:03.762 nvme2n3 : 6.05 124.31 7.77 0.00 0.00 863639.52 58478.28 1987454.82 00:13:03.762 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x0 length 0x2000 00:13:03.762 nvme3n1 : 6.02 148.95 9.31 0.00 0.00 708801.52 1739.22 1238932.87 00:13:03.762 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:03.762 Verification LBA range: start 0x2000 length 0x2000 00:13:03.762 nvme3n1 : 6.06 179.59 11.22 0.00 0.00 580242.22 1701.42 793691.37 00:13:03.762 [2024-11-20T06:11:23.395Z] =================================================================================================================== 00:13:03.762 [2024-11-20T06:11:23.395Z] Total : 1568.63 98.04 0.00 0.00 866805.75 1701.42 3110237.74 00:13:04.761 00:13:04.761 real 0m7.810s 00:13:04.761 user 0m14.451s 00:13:04.761 sys 0m0.349s 00:13:04.761 ************************************ 00:13:04.761 06:11:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:04.761 06:11:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.761 END TEST bdev_verify_big_io 00:13:04.761 ************************************ 00:13:04.761 06:11:24 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.761 06:11:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:04.761 06:11:24 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.761 06:11:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.761 ************************************ 00:13:04.761 START TEST bdev_write_zeroes 00:13:04.761 ************************************ 00:13:04.761 06:11:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.761 [2024-11-20 06:11:24.280063] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:13:04.761 [2024-11-20 06:11:24.280183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:13:05.023 [2024-11-20 06:11:24.435147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.023 [2024-11-20 06:11:24.537803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.361 Running I/O for 1 seconds... 
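bdev_write_zeroes runs single-core (Core Mask 0x1 in the table below) for one second of 4 KiB write_zeroes commands, an I/O type every xNVMe bdev advertises in the JSON dump earlier. The MiB/s column can be sanity-checked from IOPS, since MiB/s = IOPS x io_size / 2^20:

  # e.g. nvme1n1 in the table below: ~11979 IOPS of 4 KiB writes
  echo $(( 11979 * 4096 / 1048576 ))   # prints 46, matching the reported 46.79 MiB/s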
00:13:06.745 60544.00 IOPS, 236.50 MiB/s 00:13:06.745 Latency(us) 00:13:06.745 [2024-11-20T06:11:26.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.745 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.745 nvme0n1 : 1.03 9534.61 37.24 0.00 0.00 13409.94 8418.86 29844.09 00:13:06.745 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.745 nvme1n1 : 1.04 11978.59 46.79 0.00 0.00 10666.56 5923.45 26214.40 00:13:06.745 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.745 nvme2n1 : 1.03 9729.54 38.01 0.00 0.00 13040.65 5545.35 26214.40 00:13:06.745 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.745 nvme2n2 : 1.04 9621.12 37.58 0.00 0.00 13183.37 7461.02 26214.40 00:13:06.745 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.745 nvme2n3 : 1.03 9569.42 37.38 0.00 0.00 13239.80 6175.51 28835.84 00:13:06.745 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.745 nvme3n1 : 1.04 9487.03 37.06 0.00 0.00 13352.52 8418.86 27424.30 00:13:06.745 [2024-11-20T06:11:26.378Z] =================================================================================================================== 00:13:06.745 [2024-11-20T06:11:26.378Z] Total : 59920.31 234.06 0.00 0.00 12727.90 5545.35 29844.09 00:13:07.315 00:13:07.315 real 0m2.478s 00:13:07.315 user 0m1.850s 00:13:07.315 sys 0m0.433s 00:13:07.315 06:11:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:07.315 06:11:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 ************************************ 00:13:07.315 END TEST bdev_write_zeroes 00:13:07.315 ************************************ 00:13:07.315 06:11:26 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.315 06:11:26 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:07.315 06:11:26 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:07.315 06:11:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 ************************************ 00:13:07.315 START TEST bdev_json_nonenclosed 00:13:07.315 ************************************ 00:13:07.315 06:11:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.315 [2024-11-20 06:11:26.826995] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:13:07.315 [2024-11-20 06:11:26.827111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70595 ] 00:13:07.576 [2024-11-20 06:11:26.989020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.576 [2024-11-20 06:11:27.091294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.576 [2024-11-20 06:11:27.091376] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:07.576 [2024-11-20 06:11:27.091393] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:07.576 [2024-11-20 06:11:27.091402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:07.837 00:13:07.837 real 0m0.510s 00:13:07.837 user 0m0.310s 00:13:07.837 sys 0m0.095s 00:13:07.837 06:11:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:07.837 ************************************ 00:13:07.837 END TEST bdev_json_nonenclosed 00:13:07.837 06:11:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:07.837 ************************************ 00:13:07.837 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.837 06:11:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:07.837 06:11:27 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:07.837 06:11:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.837 ************************************ 00:13:07.837 START TEST bdev_json_nonarray 00:13:07.837 ************************************ 00:13:07.837 06:11:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.837 [2024-11-20 06:11:27.391633] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:13:07.837 [2024-11-20 06:11:27.391740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:13:08.100 [2024-11-20 06:11:27.545191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.100 [2024-11-20 06:11:27.648235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.101 [2024-11-20 06:11:27.648320] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
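bdev_json_nonenclosed and the bdev_json_nonarray test that follows are negative tests: bdevperf is pointed at deliberately malformed JSON configs and must fail cleanly through spdk_app_stop (the *ERROR* lines above and below) rather than crash, with run_test counting the expected non-zero exit as a pass. The log shows only the resulting error messages, not the input files; hypothetical minimal inputs consistent with those errors would look like:

  # Hypothetical inputs; the actual nonenclosed.json/nonarray.json are not shown in the log.
  printf '"subsystems": []\n' > nonenclosed.json   # top level not enclosed in {}
  printf '{ "subsystems": {} }\n' > nonarray.json  # 'subsystems' present but not an array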
00:13:08.101 [2024-11-20 06:11:27.648337] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:08.101 [2024-11-20 06:11:27.648346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:08.363 00:13:08.363 real 0m0.492s 00:13:08.363 user 0m0.310s 00:13:08.363 sys 0m0.077s 00:13:08.363 06:11:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:08.363 06:11:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:08.363 ************************************ 00:13:08.363 END TEST bdev_json_nonarray 00:13:08.363 ************************************ 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:08.363 06:11:27 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:08.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.169 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.083 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.083 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.083 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.083 00:13:43.083 real 1m29.320s 00:13:43.083 user 1m33.097s 00:13:43.083 sys 1m50.554s 00:13:43.083 06:12:02 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:43.083 06:12:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.083 ************************************ 00:13:43.083 END TEST blockdev_xnvme 00:13:43.083 ************************************ 00:13:43.083 06:12:02 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:43.083 06:12:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:43.083 06:12:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:43.083 06:12:02 -- common/autotest_common.sh@10 -- # set +x 00:13:43.083 ************************************ 00:13:43.083 START TEST ublk 00:13:43.083 ************************************ 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:43.083 * Looking for test storage... 
00:13:43.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.083 06:12:02 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.083 06:12:02 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.083 06:12:02 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.083 06:12:02 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.083 06:12:02 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.083 06:12:02 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.083 06:12:02 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.083 06:12:02 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:43.083 06:12:02 ublk -- scripts/common.sh@345 -- # : 1 00:13:43.083 06:12:02 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.083 06:12:02 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.083 06:12:02 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:43.083 06:12:02 ublk -- scripts/common.sh@353 -- # local d=1 00:13:43.083 06:12:02 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.083 06:12:02 ublk -- scripts/common.sh@355 -- # echo 1 00:13:43.083 06:12:02 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.083 06:12:02 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@353 -- # local d=2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.083 06:12:02 ublk -- scripts/common.sh@355 -- # echo 2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.083 06:12:02 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.083 06:12:02 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.083 06:12:02 ublk -- scripts/common.sh@368 -- # return 0 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:43.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.083 --rc genhtml_branch_coverage=1 00:13:43.083 --rc genhtml_function_coverage=1 00:13:43.083 --rc genhtml_legend=1 00:13:43.083 --rc geninfo_all_blocks=1 00:13:43.083 --rc geninfo_unexecuted_blocks=1 00:13:43.083 00:13:43.083 ' 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:43.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.083 --rc genhtml_branch_coverage=1 00:13:43.083 --rc genhtml_function_coverage=1 00:13:43.083 --rc genhtml_legend=1 00:13:43.083 --rc geninfo_all_blocks=1 00:13:43.083 --rc geninfo_unexecuted_blocks=1 00:13:43.083 00:13:43.083 ' 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:43.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.083 --rc genhtml_branch_coverage=1 00:13:43.083 --rc 
genhtml_function_coverage=1 00:13:43.083 --rc genhtml_legend=1 00:13:43.083 --rc geninfo_all_blocks=1 00:13:43.083 --rc geninfo_unexecuted_blocks=1 00:13:43.083 00:13:43.083 ' 00:13:43.083 06:12:02 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:43.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.083 --rc genhtml_branch_coverage=1 00:13:43.083 --rc genhtml_function_coverage=1 00:13:43.083 --rc genhtml_legend=1 00:13:43.083 --rc geninfo_all_blocks=1 00:13:43.084 --rc geninfo_unexecuted_blocks=1 00:13:43.084 00:13:43.084 ' 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:43.084 06:12:02 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:43.084 06:12:02 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:43.084 06:12:02 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:43.084 06:12:02 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:43.084 06:12:02 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:43.084 06:12:02 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:43.084 06:12:02 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:43.084 06:12:02 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:43.084 06:12:02 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:43.084 06:12:02 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:43.084 06:12:02 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:43.084 06:12:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.084 ************************************ 00:13:43.084 START TEST test_save_ublk_config 00:13:43.084 ************************************ 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70932 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70932 00:13:43.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
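The trace below exercises SPDK's save/restore path for ublk: start spdk_tgt, build a ublk device over RPC, then dump the live configuration JSON with save_config. A minimal standalone sketch of that flow follows; the rpc.py location and binary path are assumptions rather than values from this log, and the 32 MiB malloc size is back-computed from the num_blocks 8192 x block_size 4096 values in the saved config further down.

    # Hedged sketch of the save-config flow (paths assumed; RPC names as traced below)
    modprobe ublk_drv                                        # kernel driver, loaded at ublk.sh@133 above
    ./build/bin/spdk_tgt -L ublk &                           # target with ublk debug logging
    # (the test waits for the RPC socket via waitforlisten before issuing RPCs)
    ./scripts/rpc.py ublk_create_target                      # "UBLK target created successfully"
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B, per the config dump
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # num_queues 1, queue_depth 128, as traced
    ./scripts/rpc.py save_config > ublk_config.json          # emits the JSON shown below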
00:13:43.084 06:12:02 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70932 ']' 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:43.084 06:12:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.084 [2024-11-20 06:12:02.601944] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:13:43.084 [2024-11-20 06:12:02.602062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70932 ] 00:13:43.344 [2024-11-20 06:12:02.758616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.344 [2024-11-20 06:12:02.859913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.913 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:43.913 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:43.913 06:12:03 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:43.913 06:12:03 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:43.913 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.913 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.913 [2024-11-20 06:12:03.466513] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:43.913 [2024-11-20 06:12:03.467343] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:43.913 malloc0 00:13:43.913 [2024-11-20 06:12:03.530634] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:43.913 [2024-11-20 06:12:03.530725] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:43.913 [2024-11-20 06:12:03.530735] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:43.914 [2024-11-20 06:12:03.530742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:43.914 [2024-11-20 06:12:03.539588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:43.914 [2024-11-20 06:12:03.539613] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:43.914 [2024-11-20 06:12:03.542171] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:43.914 [2024-11-20 06:12:03.542291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:44.175 [2024-11-20 06:12:03.549399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.175 0 00:13:44.175 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.175 06:12:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:44.175 06:12:03 ublk.test_save_ublk_config -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.175 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:44.437 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.437 06:12:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:44.437 "subsystems": [ 00:13:44.437 { 00:13:44.437 "subsystem": "fsdev", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "fsdev_set_opts", 00:13:44.437 "params": { 00:13:44.437 "fsdev_io_pool_size": 65535, 00:13:44.437 "fsdev_io_cache_size": 256 00:13:44.437 } 00:13:44.437 } 00:13:44.437 ] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "keyring", 00:13:44.437 "config": [] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "iobuf", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "iobuf_set_options", 00:13:44.437 "params": { 00:13:44.437 "small_pool_count": 8192, 00:13:44.437 "large_pool_count": 1024, 00:13:44.437 "small_bufsize": 8192, 00:13:44.437 "large_bufsize": 135168, 00:13:44.437 "enable_numa": false 00:13:44.437 } 00:13:44.437 } 00:13:44.437 ] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "sock", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "sock_set_default_impl", 00:13:44.437 "params": { 00:13:44.437 "impl_name": "posix" 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "sock_impl_set_options", 00:13:44.437 "params": { 00:13:44.437 "impl_name": "ssl", 00:13:44.437 "recv_buf_size": 4096, 00:13:44.437 "send_buf_size": 4096, 00:13:44.437 "enable_recv_pipe": true, 00:13:44.437 "enable_quickack": false, 00:13:44.437 "enable_placement_id": 0, 00:13:44.437 "enable_zerocopy_send_server": true, 00:13:44.437 "enable_zerocopy_send_client": false, 00:13:44.437 "zerocopy_threshold": 0, 00:13:44.437 "tls_version": 0, 00:13:44.437 "enable_ktls": false 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "sock_impl_set_options", 00:13:44.437 "params": { 00:13:44.437 "impl_name": "posix", 00:13:44.437 "recv_buf_size": 2097152, 00:13:44.437 "send_buf_size": 2097152, 00:13:44.437 "enable_recv_pipe": true, 00:13:44.437 "enable_quickack": false, 00:13:44.437 "enable_placement_id": 0, 00:13:44.437 "enable_zerocopy_send_server": true, 00:13:44.437 "enable_zerocopy_send_client": false, 00:13:44.437 "zerocopy_threshold": 0, 00:13:44.437 "tls_version": 0, 00:13:44.437 "enable_ktls": false 00:13:44.437 } 00:13:44.437 } 00:13:44.437 ] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "vmd", 00:13:44.437 "config": [] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "accel", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "accel_set_options", 00:13:44.437 "params": { 00:13:44.437 "small_cache_size": 128, 00:13:44.437 "large_cache_size": 16, 00:13:44.437 "task_count": 2048, 00:13:44.437 "sequence_count": 2048, 00:13:44.437 "buf_count": 2048 00:13:44.437 } 00:13:44.437 } 00:13:44.437 ] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "bdev", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "bdev_set_options", 00:13:44.437 "params": { 00:13:44.437 "bdev_io_pool_size": 65535, 00:13:44.437 "bdev_io_cache_size": 256, 00:13:44.437 "bdev_auto_examine": true, 00:13:44.437 "iobuf_small_cache_size": 128, 00:13:44.437 "iobuf_large_cache_size": 16 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "bdev_raid_set_options", 00:13:44.437 "params": { 00:13:44.437 "process_window_size_kb": 1024, 00:13:44.437 
"process_max_bandwidth_mb_sec": 0 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "bdev_iscsi_set_options", 00:13:44.437 "params": { 00:13:44.437 "timeout_sec": 30 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "bdev_nvme_set_options", 00:13:44.437 "params": { 00:13:44.437 "action_on_timeout": "none", 00:13:44.437 "timeout_us": 0, 00:13:44.437 "timeout_admin_us": 0, 00:13:44.437 "keep_alive_timeout_ms": 10000, 00:13:44.437 "arbitration_burst": 0, 00:13:44.437 "low_priority_weight": 0, 00:13:44.437 "medium_priority_weight": 0, 00:13:44.437 "high_priority_weight": 0, 00:13:44.437 "nvme_adminq_poll_period_us": 10000, 00:13:44.437 "nvme_ioq_poll_period_us": 0, 00:13:44.437 "io_queue_requests": 0, 00:13:44.437 "delay_cmd_submit": true, 00:13:44.437 "transport_retry_count": 4, 00:13:44.437 "bdev_retry_count": 3, 00:13:44.437 "transport_ack_timeout": 0, 00:13:44.437 "ctrlr_loss_timeout_sec": 0, 00:13:44.437 "reconnect_delay_sec": 0, 00:13:44.437 "fast_io_fail_timeout_sec": 0, 00:13:44.437 "disable_auto_failback": false, 00:13:44.437 "generate_uuids": false, 00:13:44.437 "transport_tos": 0, 00:13:44.437 "nvme_error_stat": false, 00:13:44.437 "rdma_srq_size": 0, 00:13:44.437 "io_path_stat": false, 00:13:44.437 "allow_accel_sequence": false, 00:13:44.437 "rdma_max_cq_size": 0, 00:13:44.437 "rdma_cm_event_timeout_ms": 0, 00:13:44.437 "dhchap_digests": [ 00:13:44.437 "sha256", 00:13:44.437 "sha384", 00:13:44.437 "sha512" 00:13:44.437 ], 00:13:44.437 "dhchap_dhgroups": [ 00:13:44.437 "null", 00:13:44.437 "ffdhe2048", 00:13:44.437 "ffdhe3072", 00:13:44.437 "ffdhe4096", 00:13:44.437 "ffdhe6144", 00:13:44.437 "ffdhe8192" 00:13:44.437 ] 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "bdev_nvme_set_hotplug", 00:13:44.437 "params": { 00:13:44.437 "period_us": 100000, 00:13:44.437 "enable": false 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "bdev_malloc_create", 00:13:44.437 "params": { 00:13:44.437 "name": "malloc0", 00:13:44.437 "num_blocks": 8192, 00:13:44.437 "block_size": 4096, 00:13:44.437 "physical_block_size": 4096, 00:13:44.437 "uuid": "539caf47-980f-42c0-af24-31a8633419ac", 00:13:44.437 "optimal_io_boundary": 0, 00:13:44.437 "md_size": 0, 00:13:44.437 "dif_type": 0, 00:13:44.437 "dif_is_head_of_md": false, 00:13:44.437 "dif_pi_format": 0 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "bdev_wait_for_examine" 00:13:44.437 } 00:13:44.437 ] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "scsi", 00:13:44.437 "config": null 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "scheduler", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "framework_set_scheduler", 00:13:44.437 "params": { 00:13:44.437 "name": "static" 00:13:44.437 } 00:13:44.437 } 00:13:44.437 ] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "vhost_scsi", 00:13:44.437 "config": [] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "vhost_blk", 00:13:44.437 "config": [] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "ublk", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "ublk_create_target", 00:13:44.437 "params": { 00:13:44.437 "cpumask": "1" 00:13:44.437 } 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "method": "ublk_start_disk", 00:13:44.437 "params": { 00:13:44.437 "bdev_name": "malloc0", 00:13:44.437 "ublk_id": 0, 00:13:44.437 "num_queues": 1, 00:13:44.437 "queue_depth": 128 00:13:44.437 } 00:13:44.437 } 00:13:44.437 ] 00:13:44.437 }, 00:13:44.437 { 
00:13:44.437 "subsystem": "nbd", 00:13:44.437 "config": [] 00:13:44.437 }, 00:13:44.437 { 00:13:44.437 "subsystem": "nvmf", 00:13:44.437 "config": [ 00:13:44.437 { 00:13:44.437 "method": "nvmf_set_config", 00:13:44.437 "params": { 00:13:44.437 "discovery_filter": "match_any", 00:13:44.437 "admin_cmd_passthru": { 00:13:44.437 "identify_ctrlr": false 00:13:44.437 }, 00:13:44.437 "dhchap_digests": [ 00:13:44.437 "sha256", 00:13:44.437 "sha384", 00:13:44.437 "sha512" 00:13:44.437 ], 00:13:44.437 "dhchap_dhgroups": [ 00:13:44.437 "null", 00:13:44.437 "ffdhe2048", 00:13:44.437 "ffdhe3072", 00:13:44.438 "ffdhe4096", 00:13:44.438 "ffdhe6144", 00:13:44.438 "ffdhe8192" 00:13:44.438 ] 00:13:44.438 } 00:13:44.438 }, 00:13:44.438 { 00:13:44.438 "method": "nvmf_set_max_subsystems", 00:13:44.438 "params": { 00:13:44.438 "max_subsystems": 1024 00:13:44.438 } 00:13:44.438 }, 00:13:44.438 { 00:13:44.438 "method": "nvmf_set_crdt", 00:13:44.438 "params": { 00:13:44.438 "crdt1": 0, 00:13:44.438 "crdt2": 0, 00:13:44.438 "crdt3": 0 00:13:44.438 } 00:13:44.438 } 00:13:44.438 ] 00:13:44.438 }, 00:13:44.438 { 00:13:44.438 "subsystem": "iscsi", 00:13:44.438 "config": [ 00:13:44.438 { 00:13:44.438 "method": "iscsi_set_options", 00:13:44.438 "params": { 00:13:44.438 "node_base": "iqn.2016-06.io.spdk", 00:13:44.438 "max_sessions": 128, 00:13:44.438 "max_connections_per_session": 2, 00:13:44.438 "max_queue_depth": 64, 00:13:44.438 "default_time2wait": 2, 00:13:44.438 "default_time2retain": 20, 00:13:44.438 "first_burst_length": 8192, 00:13:44.438 "immediate_data": true, 00:13:44.438 "allow_duplicated_isid": false, 00:13:44.438 "error_recovery_level": 0, 00:13:44.438 "nop_timeout": 60, 00:13:44.438 "nop_in_interval": 30, 00:13:44.438 "disable_chap": false, 00:13:44.438 "require_chap": false, 00:13:44.438 "mutual_chap": false, 00:13:44.438 "chap_group": 0, 00:13:44.438 "max_large_datain_per_connection": 64, 00:13:44.438 "max_r2t_per_connection": 4, 00:13:44.438 "pdu_pool_size": 36864, 00:13:44.438 "immediate_data_pool_size": 16384, 00:13:44.438 "data_out_pool_size": 2048 00:13:44.438 } 00:13:44.438 } 00:13:44.438 ] 00:13:44.438 } 00:13:44.438 ] 00:13:44.438 }' 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70932 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70932 ']' 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70932 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70932 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:44.438 killing process with pid 70932 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70932' 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70932 00:13:44.438 06:12:03 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70932 00:13:45.445 [2024-11-20 06:12:04.902237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.445 [2024-11-20 06:12:04.932613] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.445 [2024-11-20 06:12:04.932747] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.445 [2024-11-20 06:12:04.937432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.445 [2024-11-20 06:12:04.937506] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:45.445 [2024-11-20 06:12:04.937519] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:45.445 [2024-11-20 06:12:04.937546] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:45.445 [2024-11-20 06:12:04.937696] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:47.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70993 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70993 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70993 ']' 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:47.356 06:12:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:47.357 "subsystems": [ 00:13:47.357 { 00:13:47.357 "subsystem": "fsdev", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "fsdev_set_opts", 00:13:47.357 "params": { 00:13:47.357 "fsdev_io_pool_size": 65535, 00:13:47.357 "fsdev_io_cache_size": 256 00:13:47.357 } 00:13:47.357 } 00:13:47.357 ] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "keyring", 00:13:47.357 "config": [] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "iobuf", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "iobuf_set_options", 00:13:47.357 "params": { 00:13:47.357 "small_pool_count": 8192, 00:13:47.357 "large_pool_count": 1024, 00:13:47.357 "small_bufsize": 8192, 00:13:47.357 "large_bufsize": 135168, 00:13:47.357 "enable_numa": false 00:13:47.357 } 00:13:47.357 } 00:13:47.357 ] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "sock", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "sock_set_default_impl", 00:13:47.357 "params": { 00:13:47.357 "impl_name": "posix" 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "sock_impl_set_options", 00:13:47.357 "params": { 00:13:47.357 "impl_name": "ssl", 00:13:47.357 "recv_buf_size": 4096, 00:13:47.357 "send_buf_size": 4096, 00:13:47.357 "enable_recv_pipe": true, 00:13:47.357 "enable_quickack": false, 00:13:47.357 "enable_placement_id": 0, 00:13:47.357 "enable_zerocopy_send_server": true, 00:13:47.357 "enable_zerocopy_send_client": false, 00:13:47.357 "zerocopy_threshold": 0, 00:13:47.357 "tls_version": 0, 00:13:47.357 "enable_ktls": false 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 
"method": "sock_impl_set_options", 00:13:47.357 "params": { 00:13:47.357 "impl_name": "posix", 00:13:47.357 "recv_buf_size": 2097152, 00:13:47.357 "send_buf_size": 2097152, 00:13:47.357 "enable_recv_pipe": true, 00:13:47.357 "enable_quickack": false, 00:13:47.357 "enable_placement_id": 0, 00:13:47.357 "enable_zerocopy_send_server": true, 00:13:47.357 "enable_zerocopy_send_client": false, 00:13:47.357 "zerocopy_threshold": 0, 00:13:47.357 "tls_version": 0, 00:13:47.357 "enable_ktls": false 00:13:47.357 } 00:13:47.357 } 00:13:47.357 ] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "vmd", 00:13:47.357 "config": [] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "accel", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "accel_set_options", 00:13:47.357 "params": { 00:13:47.357 "small_cache_size": 128, 00:13:47.357 "large_cache_size": 16, 00:13:47.357 "task_count": 2048, 00:13:47.357 "sequence_count": 2048, 00:13:47.357 "buf_count": 2048 00:13:47.357 } 00:13:47.357 } 00:13:47.357 ] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "bdev", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "bdev_set_options", 00:13:47.357 "params": { 00:13:47.357 "bdev_io_pool_size": 65535, 00:13:47.357 "bdev_io_cache_size": 256, 00:13:47.357 "bdev_auto_examine": true, 00:13:47.357 "iobuf_small_cache_size": 128, 00:13:47.357 "iobuf_large_cache_size": 16 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "bdev_raid_set_options", 00:13:47.357 "params": { 00:13:47.357 "process_window_size_kb": 1024, 00:13:47.357 "process_max_bandwidth_mb_sec": 0 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "bdev_iscsi_set_options", 00:13:47.357 "params": { 00:13:47.357 "timeout_sec": 30 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "bdev_nvme_set_options", 00:13:47.357 "params": { 00:13:47.357 "action_on_timeout": "none", 00:13:47.357 "timeout_us": 0, 00:13:47.357 "timeout_admin_us": 0, 00:13:47.357 "keep_alive_timeout_ms": 10000, 00:13:47.357 "arbitration_burst": 0, 00:13:47.357 "low_priority_weight": 0, 00:13:47.357 "medium_priority_weight": 0, 00:13:47.357 "high_priority_weight": 0, 00:13:47.357 "nvme_adminq_poll_period_us": 10000, 00:13:47.357 "nvme_ioq_poll_period_us": 0, 00:13:47.357 "io_queue_requests": 0, 00:13:47.357 "delay_cmd_submit": true, 00:13:47.357 "transport_retry_count": 4, 00:13:47.357 "bdev_retry_count": 3, 00:13:47.357 "transport_ack_timeout": 0, 00:13:47.357 "ctrlr_loss_timeout_sec": 0, 00:13:47.357 "reconnect_delay_sec": 0, 00:13:47.357 "fast_io_fail_timeout_sec": 0, 00:13:47.357 "disable_auto_failback": false, 00:13:47.357 "generate_uuids": false, 00:13:47.357 "transport_tos": 0, 00:13:47.357 "nvme_error_stat": false, 00:13:47.357 "rdma_srq_size": 0, 00:13:47.357 "io_path_stat": false, 00:13:47.357 "allow_accel_sequence": false, 00:13:47.357 "rdma_max_cq_size": 0, 00:13:47.357 "rdma_cm_event_timeout_ms": 0, 00:13:47.357 "dhchap_digests": [ 00:13:47.357 "sha256", 00:13:47.357 "sha384", 00:13:47.357 "sha512" 00:13:47.357 ], 00:13:47.357 "dhchap_dhgroups": [ 00:13:47.357 "null", 00:13:47.357 "ffdhe2048", 00:13:47.357 "ffdhe3072", 00:13:47.357 "ffdhe4096", 00:13:47.357 "ffdhe6144", 00:13:47.357 "ffdhe8192" 00:13:47.357 ] 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "bdev_nvme_set_hotplug", 00:13:47.357 "params": { 00:13:47.357 "period_us": 100000, 00:13:47.357 "enable": false 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "bdev_malloc_create", 
00:13:47.357 "params": { 00:13:47.357 "name": "malloc0", 00:13:47.357 "num_blocks": 8192, 00:13:47.357 "block_size": 4096, 00:13:47.357 "physical_block_size": 4096, 00:13:47.357 "uuid": "539caf47-980f-42c0-af24-31a8633419ac", 00:13:47.357 "optimal_io_boundary": 0, 00:13:47.357 "md_size": 0, 00:13:47.357 "dif_type": 0, 00:13:47.357 "dif_is_head_of_md": false, 00:13:47.357 "dif_pi_format": 0 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "bdev_wait_for_examine" 00:13:47.357 } 00:13:47.357 ] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "scsi", 00:13:47.357 "config": null 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "scheduler", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "framework_set_scheduler", 00:13:47.357 "params": { 00:13:47.357 "name": "static" 00:13:47.357 } 00:13:47.357 } 00:13:47.357 ] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "vhost_scsi", 00:13:47.357 "config": [] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "vhost_blk", 00:13:47.357 "config": [] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "ublk", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "ublk_create_target", 00:13:47.357 "params": { 00:13:47.357 "cpumask": "1" 00:13:47.357 } 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "method": "ublk_start_disk", 00:13:47.357 "params": { 00:13:47.357 "bdev_name": "malloc0", 00:13:47.357 "ublk_id": 0, 00:13:47.357 "num_queues": 1, 00:13:47.357 "queue_depth": 128 00:13:47.357 } 00:13:47.357 } 00:13:47.357 ] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "nbd", 00:13:47.357 "config": [] 00:13:47.357 }, 00:13:47.357 { 00:13:47.357 "subsystem": "nvmf", 00:13:47.357 "config": [ 00:13:47.357 { 00:13:47.357 "method": "nvmf_set_config", 00:13:47.357 "params": { 00:13:47.357 "discovery_filter": "match_any", 00:13:47.357 "admin_cmd_passthru": { 00:13:47.357 "identify_ctrlr": false 00:13:47.357 }, 00:13:47.357 "dhchap_digests": [ 00:13:47.357 "sha256", 00:13:47.357 "sha384", 00:13:47.357 "sha512" 00:13:47.357 ], 00:13:47.358 "dhchap_dhgroups": [ 00:13:47.358 "null", 00:13:47.358 "ffdhe2048", 00:13:47.358 "ffdhe3072", 00:13:47.358 "ffdhe4096", 00:13:47.358 "ffdhe6144", 00:13:47.358 "ffdhe8192" 00:13:47.358 ] 00:13:47.358 } 00:13:47.358 }, 00:13:47.358 { 00:13:47.358 "method": "nvmf_set_max_subsystems", 00:13:47.358 "params": { 00:13:47.358 "max_subsystems": 1024 00:13:47.358 } 00:13:47.358 }, 00:13:47.358 { 00:13:47.358 "method": "nvmf_set_crdt", 00:13:47.358 "params": { 00:13:47.358 "crdt1": 0, 00:13:47.358 "crdt2": 0, 00:13:47.358 "crdt3": 0 00:13:47.358 } 00:13:47.358 } 00:13:47.358 ] 00:13:47.358 }, 00:13:47.358 { 00:13:47.358 "subsystem": "iscsi", 00:13:47.358 "config": [ 00:13:47.358 { 00:13:47.358 "method": "iscsi_set_options", 00:13:47.358 "params": { 00:13:47.358 "node_base": "iqn.2016-06.io.spdk", 00:13:47.358 "max_sessions": 128, 00:13:47.358 "max_connections_per_session": 2, 00:13:47.358 "max_queue_depth": 64, 00:13:47.358 "default_time2wait": 2, 00:13:47.358 "default_time2retain": 20, 00:13:47.358 "first_burst_length": 8192, 00:13:47.358 "immediate_data": true, 00:13:47.358 "allow_duplicated_isid": false, 00:13:47.358 "error_recovery_level": 0, 00:13:47.358 "nop_timeout": 60, 00:13:47.358 "nop_in_interval": 30, 00:13:47.358 "disable_chap": false, 00:13:47.358 "require_chap": false, 00:13:47.358 "mutual_chap": false, 00:13:47.358 "chap_group": 0, 00:13:47.358 "max_large_datain_per_connection": 64, 00:13:47.358 "max_r2t_per_connection": 4, 00:13:47.358 
"pdu_pool_size": 36864, 00:13:47.358 "immediate_data_pool_size": 16384, 00:13:47.358 "data_out_pool_size": 2048 00:13:47.358 } 00:13:47.358 } 00:13:47.358 ] 00:13:47.358 } 00:13:47.358 ] 00:13:47.358 }' 00:13:47.358 [2024-11-20 06:12:06.732850] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:13:47.358 [2024-11-20 06:12:06.732962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70993 ] 00:13:47.358 [2024-11-20 06:12:06.889545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.615 [2024-11-20 06:12:06.989730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.187 [2024-11-20 06:12:07.758529] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:48.187 [2024-11-20 06:12:07.759803] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:48.187 [2024-11-20 06:12:07.766703] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:48.187 [2024-11-20 06:12:07.766796] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:48.187 [2024-11-20 06:12:07.766807] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:48.187 [2024-11-20 06:12:07.766814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:48.187 [2024-11-20 06:12:07.775595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:48.187 [2024-11-20 06:12:07.775626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:48.187 [2024-11-20 06:12:07.782524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:48.187 [2024-11-20 06:12:07.782624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:48.187 [2024-11-20 06:12:07.801576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70993 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70993 ']' 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70993 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70993 00:13:48.503 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:48.504 killing process with pid 70993 00:13:48.504 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:48.504 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70993' 00:13:48.504 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70993 00:13:48.504 06:12:07 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70993 00:13:49.447 [2024-11-20 06:12:09.044195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:49.707 [2024-11-20 06:12:09.080552] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:49.707 [2024-11-20 06:12:09.080692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:49.707 [2024-11-20 06:12:09.084760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:49.707 [2024-11-20 06:12:09.084813] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:49.707 [2024-11-20 06:12:09.085083] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:49.707 [2024-11-20 06:12:09.085109] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:49.707 [2024-11-20 06:12:09.085246] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:51.084 06:12:10 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:51.084 ************************************ 00:13:51.084 END TEST test_save_ublk_config 00:13:51.084 ************************************ 00:13:51.084 00:13:51.084 real 0m7.948s 00:13:51.084 user 0m5.414s 00:13:51.084 sys 0m3.145s 00:13:51.084 06:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.084 06:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.084 06:12:10 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71066 00:13:51.084 06:12:10 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.084 06:12:10 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71066 00:13:51.084 06:12:10 ublk -- common/autotest_common.sh@833 -- # '[' -z 71066 ']' 00:13:51.084 06:12:10 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:51.084 06:12:10 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.084 06:12:10 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:51.084 06:12:10 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.084 06:12:10 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:51.084 06:12:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.084 [2024-11-20 06:12:10.592783] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
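END TEST test_save_ublk_config above closed the round trip: a second target (pid 70993) was launched with the saved JSON fed back through -c /dev/fd/63, and /dev/ublkb0 came back without any explicit RPC calls. A sketch of the same restore step using a plain file instead of process substitution; the file name and rpc.py path are illustrative assumptions.

    # Restore side of the round trip, as exercised by pid 70993 above
    ./build/bin/spdk_tgt -L ublk -c ublk_config.json &   # config re-creates the target and ublk0 at startup
    ./scripts/rpc.py ublk_get_disks                      # expect ublk_device /dev/ublkb0, as checked above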
00:13:51.084 [2024-11-20 06:12:10.592910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71066 ] 00:13:51.344 [2024-11-20 06:12:10.753715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.344 [2024-11-20 06:12:10.857615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.344 [2024-11-20 06:12:10.857742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.913 06:12:11 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:51.913 06:12:11 ublk -- common/autotest_common.sh@866 -- # return 0 00:13:51.913 06:12:11 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:51.913 06:12:11 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:51.913 06:12:11 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.913 06:12:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.913 ************************************ 00:13:51.913 START TEST test_create_ublk 00:13:51.913 ************************************ 00:13:51.913 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:13:51.913 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:51.913 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.913 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.913 [2024-11-20 06:12:11.471519] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:51.913 [2024-11-20 06:12:11.473439] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:51.913 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.913 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:51.913 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:51.913 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.913 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:52.172 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:52.172 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.172 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:52.172 [2024-11-20 06:12:11.671654] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:52.172 [2024-11-20 06:12:11.672025] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:52.172 [2024-11-20 06:12:11.672040] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:52.172 [2024-11-20 06:12:11.672047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:52.172 [2024-11-20 06:12:11.680705] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:52.172 [2024-11-20 06:12:11.680726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:52.172 
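Once UBLK_CMD_START_DEV completes in the trace below, test_create_ublk confirms the device via ublk_get_disks and a series of jq probes on its JSON description. Roughly the standalone equivalent, with the rpc.py path assumed:

    # Equivalent of the verification that follows (ublk.sh@39..45)
    ./scripts/rpc.py ublk_get_disks -n 0 | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
    ./scripts/rpc.py ublk_get_disks -n 0 | jq -r '.[0].queue_depth'   # expect 512, matching -d 512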
[2024-11-20 06:12:11.686526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:52.172 [2024-11-20 06:12:11.695579] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:52.172 [2024-11-20 06:12:11.721325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:52.172 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:52.172 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.172 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:52.172 06:12:11 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:52.172 { 00:13:52.172 "ublk_device": "/dev/ublkb0", 00:13:52.172 "id": 0, 00:13:52.172 "queue_depth": 512, 00:13:52.172 "num_queues": 4, 00:13:52.172 "bdev_name": "Malloc0" 00:13:52.172 } 00:13:52.172 ]' 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:52.172 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:52.434 06:12:11 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
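The fio_template assembled just above expands into the invocation that opens the next trace lines. Flag by flag, with all values verbatim from run_fio_test and only the layout and comments added here:

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \          # full 128 MiB device (FILE_SIZE from ublk.sh@28)
        --rw=write --direct=1 \                # O_DIRECT writes through the ublk block device
        --time_based --runtime=10 \            # write phase is given the whole 10 s budget
        --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc \                # stamp every block with 0xcc for verification
        --verify_state_save=0

fio itself notes below that the verification read phase never starts, because --time_based lets the write phase consume the entire runtime.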
00:13:52.434 06:12:11 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:52.434 fio: verification read phase will never start because write phase uses all of runtime 00:13:52.434 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:52.434 fio-3.35 00:13:52.434 Starting 1 process 00:14:04.644 00:14:04.644 fio_test: (groupid=0, jobs=1): err= 0: pid=71109: Wed Nov 20 06:12:22 2024 00:14:04.644 write: IOPS=16.9k, BW=66.1MiB/s (69.3MB/s)(661MiB/10001msec); 0 zone resets 00:14:04.644 clat (usec): min=37, max=10412, avg=58.20, stdev=122.66 00:14:04.644 lat (usec): min=37, max=10423, avg=58.71, stdev=122.74 00:14:04.644 clat percentiles (usec): 00:14:04.644 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:14:04.644 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 50], 60.00th=[ 52], 00:14:04.644 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 62], 95.00th=[ 67], 00:14:04.644 | 99.00th=[ 233], 99.50th=[ 326], 99.90th=[ 2606], 99.95th=[ 3458], 00:14:04.644 | 99.99th=[ 3916] 00:14:04.644 bw ( KiB/s): min=20512, max=78480, per=99.61%, avg=67404.63, stdev=15257.82, samples=19 00:14:04.644 iops : min= 5128, max=19620, avg=16851.16, stdev=3814.45, samples=19 00:14:04.644 lat (usec) : 50=49.62%, 100=49.15%, 250=0.30%, 500=0.75%, 750=0.01% 00:14:04.644 lat (usec) : 1000=0.01% 00:14:04.644 lat (msec) : 2=0.04%, 4=0.12%, 10=0.01%, 20=0.01% 00:14:04.644 cpu : usr=3.08%, sys=12.83%, ctx=169191, majf=0, minf=797 00:14:04.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.644 issued rwts: total=0,169193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.644 00:14:04.644 Run status group 0 (all jobs): 00:14:04.644 WRITE: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=661MiB (693MB), run=10001-10001msec 00:14:04.644 00:14:04.644 Disk stats (read/write): 00:14:04.644 ublkb0: ios=0/167337, merge=0/0, ticks=0/8070, in_queue=8070, util=99.11% 00:14:04.644 06:12:22 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.644 [2024-11-20 06:12:22.158198] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.644 [2024-11-20 06:12:22.202006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.644 [2024-11-20 06:12:22.202930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.644 [2024-11-20 06:12:22.206170] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.644 [2024-11-20 06:12:22.206407] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:04.644 [2024-11-20 06:12:22.206422] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.644 06:12:22 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT 
rpc_cmd ublk_stop_disk 0 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:04.644 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 [2024-11-20 06:12:22.221587] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:04.645 request: 00:14:04.645 { 00:14:04.645 "ublk_id": 0, 00:14:04.645 "method": "ublk_stop_disk", 00:14:04.645 "req_id": 1 00:14:04.645 } 00:14:04.645 Got JSON-RPC error response 00:14:04.645 response: 00:14:04.645 { 00:14:04.645 "code": -19, 00:14:04.645 "message": "No such device" 00:14:04.645 } 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.645 06:12:22 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 [2024-11-20 06:12:22.236593] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:04.645 [2024-11-20 06:12:22.240156] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:04.645 [2024-11-20 06:12:22.240199] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:22 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:22 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:04.645 06:12:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:04.645 06:12:22 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:04.645 06:12:22 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:04.645 06:12:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:04.645 06:12:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:04.645 06:12:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:04.645 00:14:04.645 real 0m11.233s 00:14:04.645 user 0m0.616s 00:14:04.645 sys 0m1.378s 00:14:04.645 ************************************ 00:14:04.645 END TEST test_create_ublk 00:14:04.645 ************************************ 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 06:12:22 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:04.645 06:12:22 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:04.645 06:12:22 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.645 06:12:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 ************************************ 00:14:04.645 START TEST test_create_multi_ublk 00:14:04.645 ************************************ 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 [2024-11-20 06:12:22.743507] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:04.645 [2024-11-20 06:12:22.745158] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 [2024-11-20 06:12:22.959632] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:04.645 [2024-11-20 06:12:22.959944] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:04.645 [2024-11-20 06:12:22.959957] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:04.645 [2024-11-20 06:12:22.959966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.645 [2024-11-20 06:12:22.983528] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.645 [2024-11-20 06:12:22.983561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.645 [2024-11-20 06:12:22.995525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.645 [2024-11-20 06:12:22.996057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:04.645 [2024-11-20 06:12:23.034522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 [2024-11-20 06:12:23.286629] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:04.645 [2024-11-20 06:12:23.286958] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:04.645 [2024-11-20 06:12:23.286974] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:04.645 [2024-11-20 06:12:23.286979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.645 [2024-11-20 06:12:23.310526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.645 [2024-11-20 06:12:23.310549] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.645 [2024-11-20 06:12:23.322530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.645 [2024-11-20 06:12:23.323115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:04.645 [2024-11-20 06:12:23.370523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.645 
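The per-device pattern above (ublk.sh@64..68) now repeats for Malloc1 through Malloc3 in the trace below. Condensed, the loop amounts to the following sketch, with the rpc.py path assumed and the sizes and queue settings taken from the NUM_QUEUE=4, QUEUE_DEPTH=512, and MAX_DEV_ID=3 values set earlier in ublk.sh:

    # One malloc bdev plus one ublk disk per id, yielding /dev/ublkb0../dev/ublkb3
    for i in $(seq 0 3); do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096    # 128 MiB, 4 KiB blocks
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512    # 4 queues, depth 512
    done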
06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.645 [2024-11-20 06:12:23.589606] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:04.645 [2024-11-20 06:12:23.589915] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:04.645 [2024-11-20 06:12:23.589927] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:04.645 [2024-11-20 06:12:23.589934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.645 [2024-11-20 06:12:23.597522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.645 [2024-11-20 06:12:23.597545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.645 [2024-11-20 06:12:23.605513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.645 [2024-11-20 06:12:23.606032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:04.645 [2024-11-20 06:12:23.614422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.645 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.646 [2024-11-20 06:12:23.781628] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:04.646 [2024-11-20 06:12:23.781934] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:04.646 [2024-11-20 06:12:23.781949] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:04.646 [2024-11-20 06:12:23.781954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.646 
[2024-11-20 06:12:23.789529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.646 [2024-11-20 06:12:23.789547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.646 [2024-11-20 06:12:23.797519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.646 [2024-11-20 06:12:23.798039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:04.646 [2024-11-20 06:12:23.806543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:04.646 { 00:14:04.646 "ublk_device": "/dev/ublkb0", 00:14:04.646 "id": 0, 00:14:04.646 "queue_depth": 512, 00:14:04.646 "num_queues": 4, 00:14:04.646 "bdev_name": "Malloc0" 00:14:04.646 }, 00:14:04.646 { 00:14:04.646 "ublk_device": "/dev/ublkb1", 00:14:04.646 "id": 1, 00:14:04.646 "queue_depth": 512, 00:14:04.646 "num_queues": 4, 00:14:04.646 "bdev_name": "Malloc1" 00:14:04.646 }, 00:14:04.646 { 00:14:04.646 "ublk_device": "/dev/ublkb2", 00:14:04.646 "id": 2, 00:14:04.646 "queue_depth": 512, 00:14:04.646 "num_queues": 4, 00:14:04.646 "bdev_name": "Malloc2" 00:14:04.646 }, 00:14:04.646 { 00:14:04.646 "ublk_device": "/dev/ublkb3", 00:14:04.646 "id": 3, 00:14:04.646 "queue_depth": 512, 00:14:04.646 "num_queues": 4, 00:14:04.646 "bdev_name": "Malloc3" 00:14:04.646 } 00:14:04.646 ]' 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:04.646 06:12:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.646 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.904 [2024-11-20 06:12:24.485618] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.904 [2024-11-20 06:12:24.517003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.904 [2024-11-20 06:12:24.517953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.904 [2024-11-20 06:12:24.525547] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.904 [2024-11-20 06:12:24.525794] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:04.904 [2024-11-20 06:12:24.525808] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.904 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.163 [2024-11-20 06:12:24.541592] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.163 [2024-11-20 06:12:24.580993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.163 [2024-11-20 06:12:24.581944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.163 [2024-11-20 06:12:24.589539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.163 [2024-11-20 06:12:24.589777] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:05.163 [2024-11-20 06:12:24.589790] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.163 [2024-11-20 06:12:24.602597] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.163 [2024-11-20 06:12:24.635008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.163 [2024-11-20 06:12:24.635966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.163 [2024-11-20 06:12:24.645527] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.163 [2024-11-20 06:12:24.645774] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:05.163 [2024-11-20 06:12:24.645786] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.163 [2024-11-20 06:12:24.661598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.163 [2024-11-20 06:12:24.701010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.163 [2024-11-20 06:12:24.701918] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.163 [2024-11-20 06:12:24.709540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.163 [2024-11-20 06:12:24.709768] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:05.163 [2024-11-20 06:12:24.709782] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.163 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:05.421 [2024-11-20 06:12:24.869587] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:05.421 [2024-11-20 06:12:24.873177] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:05.421 [2024-11-20 06:12:24.873209] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:05.421 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:05.421 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.421 06:12:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:05.421 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.421 06:12:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.679 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.679 06:12:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.679 06:12:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:05.679 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.679 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.243 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.243 06:12:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.243 06:12:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:06.244 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.244 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.500 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.500 06:12:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.500 06:12:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:06.500 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.500 06:12:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.500 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.500 06:12:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:06.500 06:12:26 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:06.500 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.500 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.500 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.500 06:12:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:06.500 06:12:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:06.758 00:14:06.758 real 0m3.445s 00:14:06.758 user 0m0.791s 00:14:06.758 sys 0m0.148s 00:14:06.758 ************************************ 00:14:06.758 END TEST test_create_multi_ublk 00:14:06.758 ************************************ 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.758 06:12:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 06:12:26 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:06.758 06:12:26 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:06.758 06:12:26 ublk -- ublk/ublk.sh@130 -- # killprocess 71066 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@952 -- # '[' -z 71066 ']' 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@956 -- # kill -0 71066 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@957 -- # uname 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71066 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:06.758 killing process with pid 71066 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71066' 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@971 -- # kill 71066 00:14:06.758 06:12:26 ublk -- common/autotest_common.sh@976 -- # wait 71066 00:14:07.323 [2024-11-20 06:12:26.811680] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.323 [2024-11-20 06:12:26.811729] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.984 00:14:07.984 real 0m25.109s 00:14:07.984 user 0m35.708s 00:14:07.984 sys 0m9.566s 00:14:07.984 06:12:27 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.984 06:12:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.984 ************************************ 00:14:07.984 END TEST ublk 00:14:07.985 ************************************ 00:14:07.985 06:12:27 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.985 
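
[annotation] The teardown and leak check that close out the multi-ublk test above reduce to roughly the following sketch, assuming the same rpc.py path; the -t 120 timeout on ublk_destroy_target matches the invocation in the log, while the loop variable is illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2 3; do $rpc ublk_stop_disk $i; done          # STOP_DEV + DEL_DEV per device
    $rpc -t 120 ublk_destroy_target                            # allow time for target shutdown
    for i in 0 1 2 3; do $rpc bdev_malloc_delete Malloc$i; done
    leftover=$($rpc bdev_get_bdevs)                            # leak check: expect an empty array
    [ "$(jq length <<<"$leftover")" -eq 0 ] || echo "leftover bdevs: $leftover"
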
06:12:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:07.985 06:12:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.985 06:12:27 -- common/autotest_common.sh@10 -- # set +x 00:14:07.985 ************************************ 00:14:07.985 START TEST ublk_recovery 00:14:07.985 ************************************ 00:14:07.985 06:12:27 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.985 * Looking for test storage... 00:14:07.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:07.985 06:12:27 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.985 06:12:27 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.985 06:12:27 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.244 06:12:27 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.244 06:12:27 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.245 06:12:27 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:08.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.245 --rc genhtml_branch_coverage=1 00:14:08.245 --rc genhtml_function_coverage=1 00:14:08.245 --rc genhtml_legend=1 00:14:08.245 --rc geninfo_all_blocks=1 00:14:08.245 --rc geninfo_unexecuted_blocks=1 00:14:08.245 00:14:08.245 ' 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:08.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.245 --rc genhtml_branch_coverage=1 00:14:08.245 --rc genhtml_function_coverage=1 00:14:08.245 --rc genhtml_legend=1 00:14:08.245 --rc geninfo_all_blocks=1 00:14:08.245 --rc geninfo_unexecuted_blocks=1 00:14:08.245 00:14:08.245 ' 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:08.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.245 --rc genhtml_branch_coverage=1 00:14:08.245 --rc genhtml_function_coverage=1 00:14:08.245 --rc genhtml_legend=1 00:14:08.245 --rc geninfo_all_blocks=1 00:14:08.245 --rc geninfo_unexecuted_blocks=1 00:14:08.245 00:14:08.245 ' 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:08.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.245 --rc genhtml_branch_coverage=1 00:14:08.245 --rc genhtml_function_coverage=1 00:14:08.245 --rc genhtml_legend=1 00:14:08.245 --rc geninfo_all_blocks=1 00:14:08.245 --rc geninfo_unexecuted_blocks=1 00:14:08.245 00:14:08.245 ' 00:14:08.245 06:12:27 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:08.245 06:12:27 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:08.245 06:12:27 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:08.245 06:12:27 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71455 00:14:08.245 06:12:27 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:08.245 06:12:27 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71455 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71455 ']' 00:14:08.245 06:12:27 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:08.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:08.245 06:12:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.245 [2024-11-20 06:12:27.721404] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:14:08.245 [2024-11-20 06:12:27.721534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71455 ] 00:14:08.245 [2024-11-20 06:12:27.875951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:08.503 [2024-11-20 06:12:28.007480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.503 [2024-11-20 06:12:28.007523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.068 06:12:28 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:09.068 06:12:28 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:09.068 06:12:28 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:09.068 06:12:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.068 06:12:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.069 [2024-11-20 06:12:28.528512] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:09.069 [2024-11-20 06:12:28.530130] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:09.069 06:12:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.069 06:12:28 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:09.069 06:12:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.069 06:12:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.069 malloc0 00:14:09.069 06:12:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.069 06:12:28 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:09.069 06:12:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.069 06:12:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.069 [2024-11-20 06:12:28.616635] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:09.069 [2024-11-20 06:12:28.616725] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:09.069 [2024-11-20 06:12:28.616734] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:09.069 [2024-11-20 06:12:28.616742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:09.069 [2024-11-20 06:12:28.625593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:09.069 [2024-11-20 06:12:28.625610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:09.069 [2024-11-20 06:12:28.632518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:09.069 [2024-11-20 06:12:28.632636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:09.069 [2024-11-20 06:12:28.655520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:09.069 1 00:14:09.069 06:12:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.069 06:12:28 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:10.439 06:12:29 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71491 00:14:10.439 06:12:29 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:10.439 06:12:29 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:10.439 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.439 fio-3.35 00:14:10.439 Starting 1 process 00:14:15.697 06:12:34 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71455 00:14:15.697 06:12:34 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:20.954 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71455 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:20.954 06:12:39 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:20.954 06:12:39 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71602 00:14:20.954 06:12:39 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.954 06:12:39 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71602 00:14:20.954 06:12:39 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71602 ']' 00:14:20.954 06:12:39 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.954 06:12:39 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:20.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.954 06:12:39 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.954 06:12:39 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:20.954 06:12:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.954 [2024-11-20 06:12:39.748370] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:14:20.954 [2024-11-20 06:12:39.748513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:14:20.955 [2024-11-20 06:12:39.923421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:20.955 [2024-11-20 06:12:40.065055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.955 [2024-11-20 06:12:40.065146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:21.212 06:12:40 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.212 [2024-11-20 06:12:40.654505] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:21.212 [2024-11-20 06:12:40.656396] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.212 06:12:40 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.212 malloc0 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.212 06:12:40 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.212 [2024-11-20 06:12:40.756660] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:21.212 [2024-11-20 06:12:40.756705] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:21.212 [2024-11-20 06:12:40.756715] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:21.212 [2024-11-20 06:12:40.765548] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:21.212 [2024-11-20 06:12:40.765579] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:21.212 1 00:14:21.212 06:12:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.212 06:12:40 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71491 00:14:22.144 [2024-11-20 06:12:41.765619] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:22.144 [2024-11-20 06:12:41.771538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:22.144 [2024-11-20 06:12:41.771561] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:23.519 [2024-11-20 06:12:42.771600] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:23.519 [2024-11-20 06:12:42.781519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:23.519 [2024-11-20 06:12:42.781550] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:24.456 [2024-11-20 06:12:43.781577] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:24.456 [2024-11-20 06:12:43.789514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:24.456 [2024-11-20 06:12:43.789537] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:24.456 [2024-11-20 06:12:43.789546] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:24.456 [2024-11-20 06:12:43.789624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:46.451 [2024-11-20 06:13:05.074520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:46.451 [2024-11-20 06:13:05.077806] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:46.451 [2024-11-20 06:13:05.087513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:46.451 [2024-11-20 06:13:05.087532] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:13.013 00:15:13.013 fio_test: (groupid=0, jobs=1): err= 0: pid=71500: Wed Nov 20 06:13:29 2024 00:15:13.013 read: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(3279MiB/60002msec) 00:15:13.013 slat (nsec): min=904, max=268126, avg=5017.47, stdev=1907.84 00:15:13.013 clat (usec): min=811, max=30429k, avg=4209.10, stdev=246282.28 00:15:13.013 lat (usec): min=815, max=30429k, avg=4214.12, stdev=246282.28 00:15:13.013 clat percentiles (usec): 00:15:13.013 | 1.00th=[ 1663], 5.00th=[ 1795], 10.00th=[ 1827], 20.00th=[ 1860], 00:15:13.013 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1942], 60.00th=[ 1975], 00:15:13.013 | 70.00th=[ 2008], 80.00th=[ 2311], 90.00th=[ 2573], 95.00th=[ 4080], 00:15:13.013 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[12518], 99.95th=[12911], 00:15:13.013 | 99.99th=[13435] 00:15:13.013 bw ( KiB/s): min=36624, max=130584, per=100.00%, avg=111841.44, stdev=25348.41, samples=59 00:15:13.013 iops : min= 9156, max=32646, avg=27960.36, stdev=6337.11, samples=59 00:15:13.013 write: IOPS=14.0k, BW=54.6MiB/s (57.2MB/s)(3275MiB/60002msec); 0 zone resets 00:15:13.013 slat (nsec): min=941, max=340360, avg=5041.85, stdev=1989.51 00:15:13.013 clat (usec): min=583, max=30429k, avg=4935.94, stdev=283904.89 00:15:13.013 lat (usec): min=588, max=30429k, avg=4940.99, stdev=283904.89 00:15:13.013 clat percentiles (usec): 00:15:13.013 | 1.00th=[ 1696], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1942], 00:15:13.013 | 30.00th=[ 1975], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2057], 00:15:13.013 | 70.00th=[ 2089], 80.00th=[ 2343], 90.00th=[ 2606], 95.00th=[ 3982], 00:15:13.013 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[12387], 99.95th=[13042], 00:15:13.013 | 99.99th=[13698] 00:15:13.013 bw ( KiB/s): min=36064, max=129696, per=100.00%, avg=111690.59, stdev=25432.27, samples=59 00:15:13.013 iops : min= 9016, max=32424, avg=27922.64, stdev=6358.07, samples=59 00:15:13.013 lat (usec) : 750=0.01%, 1000=0.01% 00:15:13.013 lat (msec) : 2=54.20%, 4=40.76%, 10=4.92%, 20=0.11%, >=2000=0.01% 00:15:13.013 cpu : usr=3.19%, sys=14.41%, ctx=57827, majf=0, minf=14 00:15:13.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:13.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:13.013 
issued rwts: total=839344,838317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:13.013 00:15:13.013 Run status group 0 (all jobs): 00:15:13.013 READ: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=3279MiB (3438MB), run=60002-60002msec 00:15:13.013 WRITE: bw=54.6MiB/s (57.2MB/s), 54.6MiB/s-54.6MiB/s (57.2MB/s-57.2MB/s), io=3275MiB (3434MB), run=60002-60002msec 00:15:13.013 00:15:13.013 Disk stats (read/write): 00:15:13.013 ublkb1: ios=835866/834838, merge=0/0, ticks=3474697/4012960, in_queue=7487658, util=99.90% 00:15:13.013 06:13:29 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 [2024-11-20 06:13:29.915745] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:13.013 [2024-11-20 06:13:29.954606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:13.013 [2024-11-20 06:13:29.954746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:13.013 [2024-11-20 06:13:29.962521] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:13.013 [2024-11-20 06:13:29.962609] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:13.013 [2024-11-20 06:13:29.962616] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.013 06:13:29 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 [2024-11-20 06:13:29.977594] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:13.013 [2024-11-20 06:13:29.981241] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:13.013 [2024-11-20 06:13:29.981276] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.013 06:13:29 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:13.013 06:13:29 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:13.013 06:13:29 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71602 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71602 ']' 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71602 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:13.013 06:13:29 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71602 00:15:13.013 06:13:30 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:13.013 06:13:30 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:13.013 killing process with pid 71602 00:15:13.013 06:13:30 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71602' 00:15:13.013 06:13:30 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71602 00:15:13.013 06:13:30 ublk_recovery -- common/autotest_common.sh@976 -- # wait 71602 
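
[annotation] The recovery test above boils down to: expose a bdev over ublk, start 60 s of fio against it, SIGKILL the SPDK target mid-I/O, restart it, and re-attach the same ublk device with ublk_recover_disk so the fio job can finish its full run. A condensed sketch, assuming the binaries and paths shown in this log; the PID captures are illustrative, and the real script waits for the RPC socket between steps:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk & pid=$!
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128                 # -> /dev/ublkb1
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & fio_pid=$!
    kill -9 $pid                                               # simulate a target crash mid-I/O
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk & pid=$!
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_recover_disk malloc0 1    # GET_DEV_INFO retries, then START/END_USER_RECOVERY
    wait $fio_pid                       # fio completes its full 60 s run

Per the results above, the job sustains about 54.6 MiB/s in each direction and finishes with util=99.90% despite the mid-run kill.
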
00:15:13.013 [2024-11-20 06:13:31.152531] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:13.013 [2024-11-20 06:13:31.152588] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:13.013 00:15:13.013 real 1m4.631s 00:15:13.013 user 1m47.068s 00:15:13.013 sys 0m22.016s 00:15:13.013 06:13:32 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.013 ************************************ 00:15:13.013 END TEST ublk_recovery 00:15:13.013 ************************************ 00:15:13.013 06:13:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 06:13:32 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:13.013 06:13:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.013 06:13:32 -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 06:13:32 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:13.013 06:13:32 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:13.013 06:13:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:13.013 06:13:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:13.013 06:13:32 -- common/autotest_common.sh@10 -- # set +x 00:15:13.013 ************************************ 00:15:13.013 START TEST ftl 00:15:13.013 ************************************ 00:15:13.013 06:13:32 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:13.013 * Looking for test storage... 
00:15:13.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:13.013 06:13:32 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:13.013 06:13:32 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:15:13.013 06:13:32 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:13.013 06:13:32 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.013 06:13:32 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.013 06:13:32 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.013 06:13:32 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.013 06:13:32 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.013 06:13:32 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.013 06:13:32 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.013 06:13:32 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.013 06:13:32 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:13.013 06:13:32 ftl -- scripts/common.sh@345 -- # : 1 00:15:13.013 06:13:32 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.013 06:13:32 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.013 06:13:32 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:13.013 06:13:32 ftl -- scripts/common.sh@353 -- # local d=1 00:15:13.013 06:13:32 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.013 06:13:32 ftl -- scripts/common.sh@355 -- # echo 1 00:15:13.013 06:13:32 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.013 06:13:32 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@353 -- # local d=2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.013 06:13:32 ftl -- scripts/common.sh@355 -- # echo 2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.013 06:13:32 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.013 06:13:32 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.013 06:13:32 ftl -- scripts/common.sh@368 -- # return 0 00:15:13.013 06:13:32 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.014 06:13:32 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.014 --rc genhtml_branch_coverage=1 00:15:13.014 --rc genhtml_function_coverage=1 00:15:13.014 --rc genhtml_legend=1 00:15:13.014 --rc geninfo_all_blocks=1 00:15:13.014 --rc geninfo_unexecuted_blocks=1 00:15:13.014 00:15:13.014 ' 00:15:13.014 06:13:32 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.014 --rc genhtml_branch_coverage=1 00:15:13.014 --rc genhtml_function_coverage=1 00:15:13.014 --rc genhtml_legend=1 00:15:13.014 --rc geninfo_all_blocks=1 00:15:13.014 --rc geninfo_unexecuted_blocks=1 00:15:13.014 00:15:13.014 ' 00:15:13.014 06:13:32 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.014 --rc genhtml_branch_coverage=1 00:15:13.014 --rc genhtml_function_coverage=1 00:15:13.014 --rc 
genhtml_legend=1 00:15:13.014 --rc geninfo_all_blocks=1 00:15:13.014 --rc geninfo_unexecuted_blocks=1 00:15:13.014 00:15:13.014 ' 00:15:13.014 06:13:32 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.014 --rc genhtml_branch_coverage=1 00:15:13.014 --rc genhtml_function_coverage=1 00:15:13.014 --rc genhtml_legend=1 00:15:13.014 --rc geninfo_all_blocks=1 00:15:13.014 --rc geninfo_unexecuted_blocks=1 00:15:13.014 00:15:13.014 ' 00:15:13.014 06:13:32 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:13.014 06:13:32 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:13.014 06:13:32 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:13.014 06:13:32 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:13.014 06:13:32 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:13.014 06:13:32 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:13.014 06:13:32 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.014 06:13:32 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:13.014 06:13:32 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:13.014 06:13:32 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.014 06:13:32 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.014 06:13:32 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:13.014 06:13:32 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:13.014 06:13:32 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:13.014 06:13:32 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:13.014 06:13:32 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:13.014 06:13:32 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:13.014 06:13:32 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.014 06:13:32 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.014 06:13:32 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:13.014 06:13:32 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:13.014 06:13:32 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:13.014 06:13:32 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:13.014 06:13:32 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:13.014 06:13:32 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:13.014 06:13:32 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:13.014 06:13:32 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:13.014 06:13:32 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:13.014 06:13:32 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:13.014 06:13:32 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.014 06:13:32 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:13.014 06:13:32 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:13.014 06:13:32 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:13.014 06:13:32 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:13.014 06:13:32 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:13.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:13.272 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:13.272 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:13.272 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:13.272 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:13.272 06:13:32 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72410 00:15:13.272 06:13:32 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72410 00:15:13.272 06:13:32 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:13.272 06:13:32 ftl -- common/autotest_common.sh@833 -- # '[' -z 72410 ']' 00:15:13.272 06:13:32 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.272 06:13:32 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.272 06:13:32 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.273 06:13:32 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.273 06:13:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:13.273 [2024-11-20 06:13:32.830666] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:15:13.273 [2024-11-20 06:13:32.830796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72410 ] 00:15:13.532 [2024-11-20 06:13:32.987916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.532 [2024-11-20 06:13:33.088306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.098 06:13:33 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:14.098 06:13:33 ftl -- common/autotest_common.sh@866 -- # return 0 00:15:14.098 06:13:33 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:14.357 06:13:33 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:15.291 06:13:34 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:15.291 06:13:34 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:15.550 06:13:35 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:15.550 06:13:35 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:15.550 06:13:35 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:15.809 06:13:35 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:15.809 06:13:35 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:15.809 06:13:35 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:15.809 06:13:35 ftl -- ftl/ftl.sh@50 -- # break 00:15:15.809 06:13:35 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:15.809 06:13:35 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:15.809 06:13:35 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:15.809 06:13:35 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:16.067 06:13:35 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:16.067 06:13:35 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:16.067 06:13:35 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:16.067 06:13:35 ftl -- ftl/ftl.sh@63 -- # break 00:15:16.067 06:13:35 ftl -- ftl/ftl.sh@66 -- # killprocess 72410 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@952 -- # '[' -z 72410 ']' 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@956 -- # kill -0 72410 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@957 -- # uname 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72410 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:16.067 killing process with pid 72410 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72410' 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@971 -- # kill 72410 00:15:16.067 06:13:35 ftl -- common/autotest_common.sh@976 -- # wait 72410 00:15:17.442 06:13:36 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:17.442 06:13:36 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:17.442 06:13:36 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:17.442 06:13:36 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:17.442 06:13:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:17.442 ************************************ 00:15:17.442 START TEST ftl_fio_basic 00:15:17.442 ************************************ 00:15:17.442 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:17.442 * Looking for test storage... 
00:15:17.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:17.442 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:17.442 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:15:17.442 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:17.442 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:17.442 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:17.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.443 --rc genhtml_branch_coverage=1 00:15:17.443 --rc genhtml_function_coverage=1 00:15:17.443 --rc genhtml_legend=1 00:15:17.443 --rc geninfo_all_blocks=1 00:15:17.443 --rc geninfo_unexecuted_blocks=1 00:15:17.443 00:15:17.443 ' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:17.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.443 --rc 
genhtml_branch_coverage=1 00:15:17.443 --rc genhtml_function_coverage=1 00:15:17.443 --rc genhtml_legend=1 00:15:17.443 --rc geninfo_all_blocks=1 00:15:17.443 --rc geninfo_unexecuted_blocks=1 00:15:17.443 00:15:17.443 ' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:17.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.443 --rc genhtml_branch_coverage=1 00:15:17.443 --rc genhtml_function_coverage=1 00:15:17.443 --rc genhtml_legend=1 00:15:17.443 --rc geninfo_all_blocks=1 00:15:17.443 --rc geninfo_unexecuted_blocks=1 00:15:17.443 00:15:17.443 ' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:17.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.443 --rc genhtml_branch_coverage=1 00:15:17.443 --rc genhtml_function_coverage=1 00:15:17.443 --rc genhtml_legend=1 00:15:17.443 --rc geninfo_all_blocks=1 00:15:17.443 --rc geninfo_unexecuted_blocks=1 00:15:17.443 00:15:17.443 ' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:17.443 
06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72542 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72542 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72542 ']' 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:17.443 06:13:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:17.443 [2024-11-20 06:13:36.939194] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:15:17.443 [2024-11-20 06:13:36.939289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72542 ] 00:15:17.702 [2024-11-20 06:13:37.086824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:17.702 [2024-11-20 06:13:37.172170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.702 [2024-11-20 06:13:37.172592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.702 [2024-11-20 06:13:37.172617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:18.268 06:13:37 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:18.526 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:18.784 { 00:15:18.784 "name": "nvme0n1", 00:15:18.784 "aliases": [ 00:15:18.784 "81ea7308-9428-4323-a425-e8d662f3ddc1" 00:15:18.784 ], 00:15:18.784 "product_name": "NVMe disk", 00:15:18.784 "block_size": 4096, 00:15:18.784 "num_blocks": 1310720, 00:15:18.784 "uuid": "81ea7308-9428-4323-a425-e8d662f3ddc1", 00:15:18.784 "numa_id": -1, 00:15:18.784 "assigned_rate_limits": { 00:15:18.784 "rw_ios_per_sec": 0, 00:15:18.784 "rw_mbytes_per_sec": 0, 00:15:18.784 "r_mbytes_per_sec": 0, 00:15:18.784 "w_mbytes_per_sec": 0 00:15:18.784 }, 00:15:18.784 "claimed": false, 00:15:18.784 "zoned": false, 00:15:18.784 "supported_io_types": { 00:15:18.784 "read": true, 00:15:18.784 "write": true, 00:15:18.784 "unmap": true, 00:15:18.784 "flush": true, 00:15:18.784 "reset": true, 00:15:18.784 "nvme_admin": true, 00:15:18.784 "nvme_io": true, 00:15:18.784 "nvme_io_md": 
false, 00:15:18.784 "write_zeroes": true, 00:15:18.784 "zcopy": false, 00:15:18.784 "get_zone_info": false, 00:15:18.784 "zone_management": false, 00:15:18.784 "zone_append": false, 00:15:18.784 "compare": true, 00:15:18.784 "compare_and_write": false, 00:15:18.784 "abort": true, 00:15:18.784 "seek_hole": false, 00:15:18.784 "seek_data": false, 00:15:18.784 "copy": true, 00:15:18.784 "nvme_iov_md": false 00:15:18.784 }, 00:15:18.784 "driver_specific": { 00:15:18.784 "nvme": [ 00:15:18.784 { 00:15:18.784 "pci_address": "0000:00:11.0", 00:15:18.784 "trid": { 00:15:18.784 "trtype": "PCIe", 00:15:18.784 "traddr": "0000:00:11.0" 00:15:18.784 }, 00:15:18.784 "ctrlr_data": { 00:15:18.784 "cntlid": 0, 00:15:18.784 "vendor_id": "0x1b36", 00:15:18.784 "model_number": "QEMU NVMe Ctrl", 00:15:18.784 "serial_number": "12341", 00:15:18.784 "firmware_revision": "8.0.0", 00:15:18.784 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:18.784 "oacs": { 00:15:18.784 "security": 0, 00:15:18.784 "format": 1, 00:15:18.784 "firmware": 0, 00:15:18.784 "ns_manage": 1 00:15:18.784 }, 00:15:18.784 "multi_ctrlr": false, 00:15:18.784 "ana_reporting": false 00:15:18.784 }, 00:15:18.784 "vs": { 00:15:18.784 "nvme_version": "1.4" 00:15:18.784 }, 00:15:18.784 "ns_data": { 00:15:18.784 "id": 1, 00:15:18.784 "can_share": false 00:15:18.784 } 00:15:18.784 } 00:15:18.784 ], 00:15:18.784 "mp_policy": "active_passive" 00:15:18.784 } 00:15:18.784 } 00:15:18.784 ]' 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:18.784 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:19.042 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:19.042 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ec0e4b03-088f-4fde-85f5-7601425bd75f 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ec0e4b03-088f-4fde-85f5-7601425bd75f 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.300 06:13:38 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:19.300 06:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:19.559 { 00:15:19.559 "name": "e6e78c5a-db4d-416e-ae0f-af306deadf7d", 00:15:19.559 "aliases": [ 00:15:19.559 "lvs/nvme0n1p0" 00:15:19.559 ], 00:15:19.559 "product_name": "Logical Volume", 00:15:19.559 "block_size": 4096, 00:15:19.559 "num_blocks": 26476544, 00:15:19.559 "uuid": "e6e78c5a-db4d-416e-ae0f-af306deadf7d", 00:15:19.559 "assigned_rate_limits": { 00:15:19.559 "rw_ios_per_sec": 0, 00:15:19.559 "rw_mbytes_per_sec": 0, 00:15:19.559 "r_mbytes_per_sec": 0, 00:15:19.559 "w_mbytes_per_sec": 0 00:15:19.559 }, 00:15:19.559 "claimed": false, 00:15:19.559 "zoned": false, 00:15:19.559 "supported_io_types": { 00:15:19.559 "read": true, 00:15:19.559 "write": true, 00:15:19.559 "unmap": true, 00:15:19.559 "flush": false, 00:15:19.559 "reset": true, 00:15:19.559 "nvme_admin": false, 00:15:19.559 "nvme_io": false, 00:15:19.559 "nvme_io_md": false, 00:15:19.559 "write_zeroes": true, 00:15:19.559 "zcopy": false, 00:15:19.559 "get_zone_info": false, 00:15:19.559 "zone_management": false, 00:15:19.559 "zone_append": false, 00:15:19.559 "compare": false, 00:15:19.559 "compare_and_write": false, 00:15:19.559 "abort": false, 00:15:19.559 "seek_hole": true, 00:15:19.559 "seek_data": true, 00:15:19.559 "copy": false, 00:15:19.559 "nvme_iov_md": false 00:15:19.559 }, 00:15:19.559 "driver_specific": { 00:15:19.559 "lvol": { 00:15:19.559 "lvol_store_uuid": "ec0e4b03-088f-4fde-85f5-7601425bd75f", 00:15:19.559 "base_bdev": "nvme0n1", 00:15:19.559 "thin_provision": true, 00:15:19.559 "num_allocated_clusters": 0, 00:15:19.559 "snapshot": false, 00:15:19.559 "clone": false, 00:15:19.559 "esnap_clone": false 00:15:19.559 } 00:15:19.559 } 00:15:19.559 } 00:15:19.559 ]' 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:19.559 06:13:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:19.816 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:20.074 { 00:15:20.074 "name": "e6e78c5a-db4d-416e-ae0f-af306deadf7d", 00:15:20.074 "aliases": [ 00:15:20.074 "lvs/nvme0n1p0" 00:15:20.074 ], 00:15:20.074 "product_name": "Logical Volume", 00:15:20.074 "block_size": 4096, 00:15:20.074 "num_blocks": 26476544, 00:15:20.074 "uuid": "e6e78c5a-db4d-416e-ae0f-af306deadf7d", 00:15:20.074 "assigned_rate_limits": { 00:15:20.074 "rw_ios_per_sec": 0, 00:15:20.074 "rw_mbytes_per_sec": 0, 00:15:20.074 "r_mbytes_per_sec": 0, 00:15:20.074 "w_mbytes_per_sec": 0 00:15:20.074 }, 00:15:20.074 "claimed": false, 00:15:20.074 "zoned": false, 00:15:20.074 "supported_io_types": { 00:15:20.074 "read": true, 00:15:20.074 "write": true, 00:15:20.074 "unmap": true, 00:15:20.074 "flush": false, 00:15:20.074 "reset": true, 00:15:20.074 "nvme_admin": false, 00:15:20.074 "nvme_io": false, 00:15:20.074 "nvme_io_md": false, 00:15:20.074 "write_zeroes": true, 00:15:20.074 "zcopy": false, 00:15:20.074 "get_zone_info": false, 00:15:20.074 "zone_management": false, 00:15:20.074 "zone_append": false, 00:15:20.074 "compare": false, 00:15:20.074 "compare_and_write": false, 00:15:20.074 "abort": false, 00:15:20.074 "seek_hole": true, 00:15:20.074 "seek_data": true, 00:15:20.074 "copy": false, 00:15:20.074 "nvme_iov_md": false 00:15:20.074 }, 00:15:20.074 "driver_specific": { 00:15:20.074 "lvol": { 00:15:20.074 "lvol_store_uuid": "ec0e4b03-088f-4fde-85f5-7601425bd75f", 00:15:20.074 "base_bdev": "nvme0n1", 00:15:20.074 "thin_provision": true, 00:15:20.074 "num_allocated_clusters": 0, 00:15:20.074 "snapshot": false, 00:15:20.074 "clone": false, 00:15:20.074 "esnap_clone": false 00:15:20.074 } 00:15:20.074 } 00:15:20.074 } 00:15:20.074 ]' 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:20.074 06:13:39 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:20.332 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:20.332 06:13:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e6e78c5a-db4d-416e-ae0f-af306deadf7d 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:20.590 { 00:15:20.590 "name": "e6e78c5a-db4d-416e-ae0f-af306deadf7d", 00:15:20.590 "aliases": [ 00:15:20.590 "lvs/nvme0n1p0" 00:15:20.590 ], 00:15:20.590 "product_name": "Logical Volume", 00:15:20.590 "block_size": 4096, 00:15:20.590 "num_blocks": 26476544, 00:15:20.590 "uuid": "e6e78c5a-db4d-416e-ae0f-af306deadf7d", 00:15:20.590 "assigned_rate_limits": { 00:15:20.590 "rw_ios_per_sec": 0, 00:15:20.590 "rw_mbytes_per_sec": 0, 00:15:20.590 "r_mbytes_per_sec": 0, 00:15:20.590 "w_mbytes_per_sec": 0 00:15:20.590 }, 00:15:20.590 "claimed": false, 00:15:20.590 "zoned": false, 00:15:20.590 "supported_io_types": { 00:15:20.590 "read": true, 00:15:20.590 "write": true, 00:15:20.590 "unmap": true, 00:15:20.590 "flush": false, 00:15:20.590 "reset": true, 00:15:20.590 "nvme_admin": false, 00:15:20.590 "nvme_io": false, 00:15:20.590 "nvme_io_md": false, 00:15:20.590 "write_zeroes": true, 00:15:20.590 "zcopy": false, 00:15:20.590 "get_zone_info": false, 00:15:20.590 "zone_management": false, 00:15:20.590 "zone_append": false, 00:15:20.590 "compare": false, 00:15:20.590 "compare_and_write": false, 00:15:20.590 "abort": false, 00:15:20.590 "seek_hole": true, 00:15:20.590 "seek_data": true, 00:15:20.590 "copy": false, 00:15:20.590 "nvme_iov_md": false 00:15:20.590 }, 00:15:20.590 "driver_specific": { 00:15:20.590 "lvol": { 00:15:20.590 "lvol_store_uuid": "ec0e4b03-088f-4fde-85f5-7601425bd75f", 00:15:20.590 "base_bdev": "nvme0n1", 00:15:20.590 "thin_provision": true, 00:15:20.590 "num_allocated_clusters": 0, 00:15:20.590 "snapshot": false, 00:15:20.590 "clone": false, 00:15:20.590 "esnap_clone": false 00:15:20.590 } 00:15:20.590 } 00:15:20.590 } 00:15:20.590 ]' 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:20.590 06:13:40 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e6e78c5a-db4d-416e-ae0f-af306deadf7d -c nvc0n1p0 --l2p_dram_limit 60 00:15:20.849 [2024-11-20 06:13:40.325125] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.325173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:20.849 [2024-11-20 06:13:40.325186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:20.849 [2024-11-20 06:13:40.325193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.325244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.325253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:20.849 [2024-11-20 06:13:40.325261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:20.849 [2024-11-20 06:13:40.325267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.325301] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:20.849 [2024-11-20 06:13:40.325921] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:20.849 [2024-11-20 06:13:40.325947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.325954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:20.849 [2024-11-20 06:13:40.325963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:15:20.849 [2024-11-20 06:13:40.325969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.326030] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 084f459b-ab3a-4d01-8976-08c46ae2a22b 00:15:20.849 [2024-11-20 06:13:40.327114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.327146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:20.849 [2024-11-20 06:13:40.327155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:20.849 [2024-11-20 06:13:40.327162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.332098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.332127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:20.849 [2024-11-20 06:13:40.332135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms 00:15:20.849 [2024-11-20 06:13:40.332144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.332228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.332237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:20.849 [2024-11-20 06:13:40.332245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:20.849 [2024-11-20 06:13:40.332254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.332296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.332305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:20.849 [2024-11-20 06:13:40.332311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:20.849 [2024-11-20 06:13:40.332319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:20.849 [2024-11-20 06:13:40.332340] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:20.849 [2024-11-20 06:13:40.335385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.335409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:20.849 [2024-11-20 06:13:40.335420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:15:20.849 [2024-11-20 06:13:40.335429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.335461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.335468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:20.849 [2024-11-20 06:13:40.335477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:20.849 [2024-11-20 06:13:40.335484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.335528] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:20.849 [2024-11-20 06:13:40.335648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:20.849 [2024-11-20 06:13:40.335666] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:20.849 [2024-11-20 06:13:40.335675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:20.849 [2024-11-20 06:13:40.335685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:20.849 [2024-11-20 06:13:40.335693] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:20.849 [2024-11-20 06:13:40.335700] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:20.849 [2024-11-20 06:13:40.335707] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:20.849 [2024-11-20 06:13:40.335715] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:20.849 [2024-11-20 06:13:40.335720] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:20.849 [2024-11-20 06:13:40.335728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.335736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:20.849 [2024-11-20 06:13:40.335744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:15:20.849 [2024-11-20 06:13:40.335750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.335825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.849 [2024-11-20 06:13:40.335831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:20.849 [2024-11-20 06:13:40.335838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:15:20.849 [2024-11-20 06:13:40.335844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.849 [2024-11-20 06:13:40.335939] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:20.849 [2024-11-20 06:13:40.335950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:20.849 
[2024-11-20 06:13:40.335960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.849 [2024-11-20 06:13:40.335966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.849 [2024-11-20 06:13:40.335974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:20.849 [2024-11-20 06:13:40.335979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:20.849 [2024-11-20 06:13:40.335986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:20.849 [2024-11-20 06:13:40.335991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:20.849 [2024-11-20 06:13:40.335998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.849 [2024-11-20 06:13:40.336010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:20.849 [2024-11-20 06:13:40.336015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:20.849 [2024-11-20 06:13:40.336022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.849 [2024-11-20 06:13:40.336027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:20.849 [2024-11-20 06:13:40.336034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:20.849 [2024-11-20 06:13:40.336039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:20.849 [2024-11-20 06:13:40.336054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:20.849 [2024-11-20 06:13:40.336060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:20.849 [2024-11-20 06:13:40.336073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:20.849 [2024-11-20 06:13:40.336084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:20.849 [2024-11-20 06:13:40.336090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:20.849 [2024-11-20 06:13:40.336102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:20.849 [2024-11-20 06:13:40.336108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:20.849 [2024-11-20 06:13:40.336120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:20.849 [2024-11-20 06:13:40.336126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:20.849 [2024-11-20 06:13:40.336140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:20.849 [2024-11-20 06:13:40.336148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:20.849 [2024-11-20 06:13:40.336154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:20.849 [2024-11-20 06:13:40.336160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:20.849 [2024-11-20 06:13:40.336175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:20.849 [2024-11-20 06:13:40.336182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:20.849 [2024-11-20 06:13:40.336188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:20.850 [2024-11-20 06:13:40.336195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:20.850 [2024-11-20 06:13:40.336200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.850 [2024-11-20 06:13:40.336207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:20.850 [2024-11-20 06:13:40.336212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:20.850 [2024-11-20 06:13:40.336220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.850 [2024-11-20 06:13:40.336225] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:20.850 [2024-11-20 06:13:40.336233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:20.850 [2024-11-20 06:13:40.336238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.850 [2024-11-20 06:13:40.336245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.850 [2024-11-20 06:13:40.336251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:20.850 [2024-11-20 06:13:40.336260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:20.850 [2024-11-20 06:13:40.336265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:20.850 [2024-11-20 06:13:40.336273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:20.850 [2024-11-20 06:13:40.336278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:20.850 [2024-11-20 06:13:40.336285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:20.850 [2024-11-20 06:13:40.336294] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:20.850 [2024-11-20 06:13:40.336302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.850 [2024-11-20 06:13:40.336309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:20.850 [2024-11-20 06:13:40.336316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:20.850 [2024-11-20 06:13:40.336322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:20.850 [2024-11-20 06:13:40.336336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:20.850 [2024-11-20 06:13:40.336342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:20.850 [2024-11-20 06:13:40.336349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:20.850 [2024-11-20 
06:13:40.336355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:20.850 [2024-11-20 06:13:40.336364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:20.850 [2024-11-20 06:13:40.336369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:20.850 [2024-11-20 06:13:40.336378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:20.850 [2024-11-20 06:13:40.336384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:20.850 [2024-11-20 06:13:40.336392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:20.850 [2024-11-20 06:13:40.336397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:20.850 [2024-11-20 06:13:40.336404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:20.850 [2024-11-20 06:13:40.336412] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:20.850 [2024-11-20 06:13:40.336419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.850 [2024-11-20 06:13:40.336427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:20.850 [2024-11-20 06:13:40.336434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:20.850 [2024-11-20 06:13:40.336440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:20.850 [2024-11-20 06:13:40.336447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:20.850 [2024-11-20 06:13:40.336453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.850 [2024-11-20 06:13:40.336461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:20.850 [2024-11-20 06:13:40.336467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:15:20.850 [2024-11-20 06:13:40.336473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.850 [2024-11-20 06:13:40.336539] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
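Each Action/name/duration/status quartet in the [FTL][ftl0] trace above is one step of the "FTL startup" management process driven by the bdev_ftl_create call at ftl/fio.sh@60; of the ~2.9 s total reported further below, scrubbing the five NV cache chunks accounts for roughly 2.5 s. For reference, a sketch of the create/unload RPC pair that brackets this whole trace (the ftl0 name, the lvol bdev UUID, the nvc0n1p0 cache partition, and the 60 MiB L2P limit are the values recorded in this run; -t 240 matches the timeout set at ftl/fio.sh@27):

    # RPC pair producing the FTL startup/shutdown traces in this log.
    scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 \
        -d e6e78c5a-db4d-416e-ae0f-af306deadf7d \
        -c nvc0n1p0 \
        --l2p_dram_limit 60

    # ... fio jobs run against ftl0 here ...

    scripts/rpc.py bdev_ftl_unload -b ftl0   # as traced at ftl/fio.sh@73 below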
00:15:20.850 [2024-11-20 06:13:40.336551] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:23.377 [2024-11-20 06:13:42.852437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.852510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:23.377 [2024-11-20 06:13:42.852525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2515.886 ms 00:15:23.377 [2024-11-20 06:13:42.852535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.878693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.878758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:23.377 [2024-11-20 06:13:42.878776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.949 ms 00:15:23.377 [2024-11-20 06:13:42.878789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.878961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.878983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:23.377 [2024-11-20 06:13:42.878993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:15:23.377 [2024-11-20 06:13:42.879005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.929110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.929225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:23.377 [2024-11-20 06:13:42.929257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.051 ms 00:15:23.377 [2024-11-20 06:13:42.929288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.929381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.929422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:23.377 [2024-11-20 06:13:42.929452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:23.377 [2024-11-20 06:13:42.929483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.930144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.930220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:23.377 [2024-11-20 06:13:42.930250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:15:23.377 [2024-11-20 06:13:42.930273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.930614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.930664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:23.377 [2024-11-20 06:13:42.930687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:15:23.377 [2024-11-20 06:13:42.930775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.945601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.945636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:23.377 [2024-11-20 
06:13:42.945646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.747 ms 00:15:23.377 [2024-11-20 06:13:42.945656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.377 [2024-11-20 06:13:42.957233] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:23.377 [2024-11-20 06:13:42.972065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.377 [2024-11-20 06:13:42.972117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:23.377 [2024-11-20 06:13:42.972133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.315 ms 00:15:23.377 [2024-11-20 06:13:42.972141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.019387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.019444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:23.635 [2024-11-20 06:13:43.019462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.202 ms 00:15:23.635 [2024-11-20 06:13:43.019470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.019667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.019678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:23.635 [2024-11-20 06:13:43.019690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:15:23.635 [2024-11-20 06:13:43.019698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.043655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.043699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:23.635 [2024-11-20 06:13:43.043713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.896 ms 00:15:23.635 [2024-11-20 06:13:43.043722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.066663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.066699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:23.635 [2024-11-20 06:13:43.066724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.891 ms 00:15:23.635 [2024-11-20 06:13:43.066735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.067422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.067450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:23.635 [2024-11-20 06:13:43.067462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:15:23.635 [2024-11-20 06:13:43.067470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.131786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.131831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:23.635 [2024-11-20 06:13:43.131851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.256 ms 00:15:23.635 [2024-11-20 06:13:43.131860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 
06:13:43.155933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.155974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:23.635 [2024-11-20 06:13:43.155989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.987 ms 00:15:23.635 [2024-11-20 06:13:43.155997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.178654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.178691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:23.635 [2024-11-20 06:13:43.178704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.615 ms 00:15:23.635 [2024-11-20 06:13:43.178723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.201503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.201557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:23.635 [2024-11-20 06:13:43.201572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.722 ms 00:15:23.635 [2024-11-20 06:13:43.201581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.201633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.201643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:23.635 [2024-11-20 06:13:43.201657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:23.635 [2024-11-20 06:13:43.201665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.201752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.635 [2024-11-20 06:13:43.201769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:23.635 [2024-11-20 06:13:43.201779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:23.635 [2024-11-20 06:13:43.201787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.635 [2024-11-20 06:13:43.202781] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2877.187 ms, result 0 00:15:23.635 { 00:15:23.635 "name": "ftl0", 00:15:23.635 "uuid": "084f459b-ab3a-4d01-8976-08c46ae2a22b" 00:15:23.635 } 00:15:23.635 06:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:23.635 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:15:23.635 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:23.635 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:15:23.635 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:23.635 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:23.635 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:23.893 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:24.151 [ 00:15:24.151 { 00:15:24.151 "name": "ftl0", 00:15:24.151 "aliases": [ 00:15:24.151 "084f459b-ab3a-4d01-8976-08c46ae2a22b" 00:15:24.151 ], 00:15:24.151 "product_name": "FTL 
disk", 00:15:24.151 "block_size": 4096, 00:15:24.151 "num_blocks": 20971520, 00:15:24.151 "uuid": "084f459b-ab3a-4d01-8976-08c46ae2a22b", 00:15:24.151 "assigned_rate_limits": { 00:15:24.151 "rw_ios_per_sec": 0, 00:15:24.151 "rw_mbytes_per_sec": 0, 00:15:24.151 "r_mbytes_per_sec": 0, 00:15:24.151 "w_mbytes_per_sec": 0 00:15:24.151 }, 00:15:24.151 "claimed": false, 00:15:24.151 "zoned": false, 00:15:24.151 "supported_io_types": { 00:15:24.151 "read": true, 00:15:24.151 "write": true, 00:15:24.151 "unmap": true, 00:15:24.151 "flush": true, 00:15:24.151 "reset": false, 00:15:24.151 "nvme_admin": false, 00:15:24.151 "nvme_io": false, 00:15:24.151 "nvme_io_md": false, 00:15:24.151 "write_zeroes": true, 00:15:24.151 "zcopy": false, 00:15:24.151 "get_zone_info": false, 00:15:24.151 "zone_management": false, 00:15:24.151 "zone_append": false, 00:15:24.151 "compare": false, 00:15:24.151 "compare_and_write": false, 00:15:24.151 "abort": false, 00:15:24.151 "seek_hole": false, 00:15:24.151 "seek_data": false, 00:15:24.151 "copy": false, 00:15:24.151 "nvme_iov_md": false 00:15:24.151 }, 00:15:24.151 "driver_specific": { 00:15:24.151 "ftl": { 00:15:24.151 "base_bdev": "e6e78c5a-db4d-416e-ae0f-af306deadf7d", 00:15:24.151 "cache": "nvc0n1p0" 00:15:24.151 } 00:15:24.151 } 00:15:24.151 } 00:15:24.151 ] 00:15:24.151 06:13:43 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:15:24.151 06:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:24.151 06:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:24.409 06:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:24.409 06:13:43 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:24.409 [2024-11-20 06:13:44.007460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.409 [2024-11-20 06:13:44.007527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:24.409 [2024-11-20 06:13:44.007541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:24.409 [2024-11-20 06:13:44.007553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.409 [2024-11-20 06:13:44.007587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:24.409 [2024-11-20 06:13:44.010223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.409 [2024-11-20 06:13:44.010259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:24.409 [2024-11-20 06:13:44.010272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:15:24.409 [2024-11-20 06:13:44.010280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.409 [2024-11-20 06:13:44.010739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.409 [2024-11-20 06:13:44.010768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:24.409 [2024-11-20 06:13:44.010783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:15:24.409 [2024-11-20 06:13:44.010794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.409 [2024-11-20 06:13:44.014034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.410 [2024-11-20 06:13:44.014056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:24.410 
[2024-11-20 06:13:44.014067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:15:24.410 [2024-11-20 06:13:44.014076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.410 [2024-11-20 06:13:44.020279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.410 [2024-11-20 06:13:44.020308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:24.410 [2024-11-20 06:13:44.020321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:15:24.410 [2024-11-20 06:13:44.020328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.043672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.669 [2024-11-20 06:13:44.043723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:24.669 [2024-11-20 06:13:44.043737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.261 ms 00:15:24.669 [2024-11-20 06:13:44.043744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.057847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.669 [2024-11-20 06:13:44.057888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:24.669 [2024-11-20 06:13:44.057905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.038 ms 00:15:24.669 [2024-11-20 06:13:44.057914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.058106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.669 [2024-11-20 06:13:44.058118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:24.669 [2024-11-20 06:13:44.058128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:15:24.669 [2024-11-20 06:13:44.058135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.081144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.669 [2024-11-20 06:13:44.081179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:24.669 [2024-11-20 06:13:44.081192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.978 ms 00:15:24.669 [2024-11-20 06:13:44.081199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.104030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.669 [2024-11-20 06:13:44.104069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:24.669 [2024-11-20 06:13:44.104082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.786 ms 00:15:24.669 [2024-11-20 06:13:44.104089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.126344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.669 [2024-11-20 06:13:44.126381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:24.669 [2024-11-20 06:13:44.126393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.211 ms 00:15:24.669 [2024-11-20 06:13:44.126400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.148880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.669 [2024-11-20 06:13:44.148931] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:24.669 [2024-11-20 06:13:44.148944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.382 ms 00:15:24.669 [2024-11-20 06:13:44.148952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.669 [2024-11-20 06:13:44.148998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:24.669 [2024-11-20 06:13:44.149013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 
[2024-11-20 06:13:44.149203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:24.669 [2024-11-20 06:13:44.149237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:24.670 [2024-11-20 06:13:44.149419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:24.670 [2024-11-20 06:13:44.149896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:24.670 [2024-11-20 06:13:44.149905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 084f459b-ab3a-4d01-8976-08c46ae2a22b 00:15:24.670 [2024-11-20 06:13:44.149913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:24.670 [2024-11-20 06:13:44.149923] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:24.670 [2024-11-20 06:13:44.149932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:24.670 [2024-11-20 06:13:44.149941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:24.670 [2024-11-20 06:13:44.149948] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:24.670 [2024-11-20 06:13:44.149957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:24.670 [2024-11-20 06:13:44.149964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:24.670 [2024-11-20 06:13:44.149972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:24.670 [2024-11-20 06:13:44.149978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:24.670 [2024-11-20 06:13:44.149987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.670 [2024-11-20 06:13:44.149995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:24.670 [2024-11-20 06:13:44.150005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:15:24.670 [2024-11-20 06:13:44.150012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.670 [2024-11-20 06:13:44.162532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.670 [2024-11-20 06:13:44.162565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:24.670 [2024-11-20 06:13:44.162577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.467 ms 00:15:24.670 [2024-11-20 06:13:44.162584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.670 [2024-11-20 06:13:44.162959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.671 [2024-11-20 06:13:44.162979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:24.671 [2024-11-20 06:13:44.162989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:15:24.671 [2024-11-20 06:13:44.162997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.671 [2024-11-20 06:13:44.206839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.671 [2024-11-20 06:13:44.206878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:24.671 [2024-11-20 06:13:44.206891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.671 [2024-11-20 06:13:44.206899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:24.671 [2024-11-20 06:13:44.206971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.671 [2024-11-20 06:13:44.206980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:24.671 [2024-11-20 06:13:44.206990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.671 [2024-11-20 06:13:44.206997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.671 [2024-11-20 06:13:44.207084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.671 [2024-11-20 06:13:44.207096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:24.671 [2024-11-20 06:13:44.207106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.671 [2024-11-20 06:13:44.207113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.671 [2024-11-20 06:13:44.207140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.671 [2024-11-20 06:13:44.207152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:24.671 [2024-11-20 06:13:44.207161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.671 [2024-11-20 06:13:44.207168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.671 [2024-11-20 06:13:44.290393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.671 [2024-11-20 06:13:44.290450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:24.671 [2024-11-20 06:13:44.290465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.671 [2024-11-20 06:13:44.290472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.353981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.929 [2024-11-20 06:13:44.354030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:24.929 [2024-11-20 06:13:44.354042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.929 [2024-11-20 06:13:44.354050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.354140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.929 [2024-11-20 06:13:44.354150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:24.929 [2024-11-20 06:13:44.354162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.929 [2024-11-20 06:13:44.354169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.354231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.929 [2024-11-20 06:13:44.354240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:24.929 [2024-11-20 06:13:44.354249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.929 [2024-11-20 06:13:44.354256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.354358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.929 [2024-11-20 06:13:44.354368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:24.929 [2024-11-20 06:13:44.354379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.929 [2024-11-20 
06:13:44.354386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.354430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.929 [2024-11-20 06:13:44.354440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:24.929 [2024-11-20 06:13:44.354449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.929 [2024-11-20 06:13:44.354456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.354508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.929 [2024-11-20 06:13:44.354517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:24.929 [2024-11-20 06:13:44.354526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.929 [2024-11-20 06:13:44.354535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.354582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:24.929 [2024-11-20 06:13:44.354607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:24.929 [2024-11-20 06:13:44.354616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:24.929 [2024-11-20 06:13:44.354624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.929 [2024-11-20 06:13:44.354785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.307 ms, result 0 00:15:24.929 true 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72542 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72542 ']' 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72542 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72542 00:15:24.929 killing process with pid 72542 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72542' 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72542 00:15:24.929 06:13:44 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72542 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:31.485 06:13:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:31.486 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:31.486 fio-3.35 00:15:31.486 Starting 1 thread 00:15:35.670 00:15:35.670 test: (groupid=0, jobs=1): err= 0: pid=72726: Wed Nov 20 06:13:54 2024 00:15:35.670 read: IOPS=1381, BW=91.8MiB/s (96.2MB/s)(255MiB/2774msec) 00:15:35.670 slat (nsec): min=3797, max=23751, avg=4766.82, stdev=1992.99 00:15:35.670 clat (usec): min=227, max=916, avg=327.02, stdev=42.66 00:15:35.670 lat (usec): min=231, max=926, avg=331.79, stdev=43.55 00:15:35.670 clat percentiles (usec): 00:15:35.670 | 1.00th=[ 260], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:15:35.670 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 322], 00:15:35.670 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 424], 00:15:35.670 | 99.00th=[ 498], 99.50th=[ 537], 99.90th=[ 725], 99.95th=[ 766], 00:15:35.670 | 99.99th=[ 914] 00:15:35.670 write: IOPS=1392, BW=92.5MiB/s (97.0MB/s)(256MiB/2769msec); 0 zone resets 00:15:35.670 slat (nsec): min=17394, max=77079, avg=20161.26, stdev=3167.91 00:15:35.670 clat (usec): min=275, max=1053, avg=355.78, stdev=56.39 00:15:35.670 lat (usec): min=301, max=1073, avg=375.94, stdev=56.69 00:15:35.670 clat percentiles (usec): 00:15:35.670 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:15:35.670 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 347], 00:15:35.670 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 408], 95.00th=[ 441], 00:15:35.670 | 99.00th=[ 619], 99.50th=[ 676], 99.90th=[ 799], 99.95th=[ 963], 00:15:35.670 | 99.99th=[ 1057] 00:15:35.670 bw ( KiB/s): min=91936, max=99688, per=100.00%, avg=95308.80, stdev=4025.69, samples=5 00:15:35.670 iops : min= 1352, max= 1466, avg=1401.60, stdev=59.20, samples=5 00:15:35.670 lat (usec) : 250=0.22%, 500=97.66%, 750=1.98%, 1000=0.13% 
00:15:35.670 lat (msec) : 2=0.01% 00:15:35.670 cpu : usr=99.31%, sys=0.04%, ctx=6, majf=0, minf=1169 00:15:35.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:35.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.670 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:35.670 00:15:35.670 Run status group 0 (all jobs): 00:15:35.670 READ: bw=91.8MiB/s (96.2MB/s), 91.8MiB/s-91.8MiB/s (96.2MB/s-96.2MB/s), io=255MiB (267MB), run=2774-2774msec 00:15:35.670 WRITE: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=256MiB (269MB), run=2769-2769msec 00:15:37.054 ----------------------------------------------------- 00:15:37.054 Suppressions used: 00:15:37.054 count bytes template 00:15:37.054 1 5 /usr/src/fio/parse.c 00:15:37.054 1 8 libtcmalloc_minimal.so 00:15:37.054 1 904 libcrypto.so 00:15:37.054 ----------------------------------------------------- 00:15:37.054 00:15:37.054 06:13:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:37.055 06:13:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:37.055 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:37.055 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:37.055 fio-3.35 00:15:37.055 Starting 2 threads 00:16:03.587 00:16:03.587 first_half: (groupid=0, jobs=1): err= 0: pid=72819: Wed Nov 20 06:14:21 2024 00:16:03.587 read: IOPS=2777, BW=10.9MiB/s (11.4MB/s)(255MiB/23538msec) 00:16:03.587 slat (usec): min=3, max=435, avg= 4.46, stdev= 2.34 00:16:03.587 clat (usec): min=604, max=293834, avg=36229.50, stdev=21249.42 00:16:03.587 lat (usec): min=609, max=293839, avg=36233.96, stdev=21249.53 00:16:03.587 clat percentiles (msec): 00:16:03.587 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:16:03.587 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:16:03.587 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 52], 00:16:03.587 | 99.00th=[ 157], 99.50th=[ 182], 99.90th=[ 253], 99.95th=[ 271], 00:16:03.587 | 99.99th=[ 288] 00:16:03.587 write: IOPS=2952, BW=11.5MiB/s (12.1MB/s)(256MiB/22194msec); 0 zone resets 00:16:03.587 slat (usec): min=3, max=2667, avg= 6.40, stdev=12.93 00:16:03.587 clat (usec): min=360, max=125539, avg=9801.90, stdev=16981.22 00:16:03.587 lat (usec): min=369, max=125545, avg=9808.30, stdev=16981.52 00:16:03.587 clat percentiles (usec): 00:16:03.587 | 1.00th=[ 652], 5.00th=[ 889], 10.00th=[ 1139], 20.00th=[ 2540], 00:16:03.587 | 30.00th=[ 3589], 40.00th=[ 4490], 50.00th=[ 5145], 60.00th=[ 5669], 00:16:03.587 | 70.00th=[ 7046], 80.00th=[ 10290], 90.00th=[ 13698], 95.00th=[ 52691], 00:16:03.587 | 99.00th=[ 96994], 99.50th=[105382], 99.90th=[120062], 99.95th=[122160], 00:16:03.587 | 99.99th=[125305] 00:16:03.587 bw ( KiB/s): min= 2280, max=43496, per=100.00%, avg=23827.45, stdev=13570.20, samples=22 00:16:03.587 iops : min= 570, max=10874, avg=5956.86, stdev=3392.55, samples=22 00:16:03.587 lat (usec) : 500=0.03%, 750=1.15%, 1000=2.62% 00:16:03.587 lat (msec) : 2=4.55%, 4=8.72%, 10=22.86%, 20=6.74%, 50=47.87% 00:16:03.587 lat (msec) : 100=3.80%, 250=1.61%, 500=0.05% 00:16:03.587 cpu : usr=98.99%, sys=0.19%, ctx=63, majf=0, minf=5561 00:16:03.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:03.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.587 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.587 issued rwts: total=65383,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.587 second_half: (groupid=0, jobs=1): err= 0: pid=72820: Wed Nov 20 06:14:21 2024 00:16:03.587 read: IOPS=2759, BW=10.8MiB/s (11.3MB/s)(255MiB/23645msec) 00:16:03.587 slat (nsec): min=3106, max=85452, avg=5125.77, stdev=1301.86 00:16:03.587 clat (usec): min=630, max=285312, avg=36130.84, stdev=22601.99 00:16:03.587 lat (usec): min=636, max=285317, avg=36135.97, stdev=22602.12 00:16:03.587 clat percentiles (msec): 00:16:03.587 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 30], 00:16:03.587 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:16:03.587 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 
41], 95.00th=[ 56], 00:16:03.587 | 99.00th=[ 150], 99.50th=[ 188], 99.90th=[ 249], 99.95th=[ 275], 00:16:03.587 | 99.99th=[ 279] 00:16:03.587 write: IOPS=3536, BW=13.8MiB/s (14.5MB/s)(256MiB/18533msec); 0 zone resets 00:16:03.587 slat (usec): min=4, max=2213, avg= 7.14, stdev=10.34 00:16:03.587 clat (usec): min=358, max=125642, avg=10185.57, stdev=17497.09 00:16:03.587 lat (usec): min=374, max=125648, avg=10192.71, stdev=17497.19 00:16:03.587 clat percentiles (usec): 00:16:03.587 | 1.00th=[ 644], 5.00th=[ 824], 10.00th=[ 955], 20.00th=[ 1221], 00:16:03.587 | 30.00th=[ 2343], 40.00th=[ 3884], 50.00th=[ 5014], 60.00th=[ 5932], 00:16:03.587 | 70.00th=[ 7504], 80.00th=[ 11076], 90.00th=[ 22676], 95.00th=[ 53216], 00:16:03.587 | 99.00th=[ 98042], 99.50th=[106431], 99.90th=[121111], 99.95th=[123208], 00:16:03.587 | 99.99th=[125305] 00:16:03.587 bw ( KiB/s): min= 976, max=45440, per=100.00%, avg=24966.10, stdev=12233.17, samples=21 00:16:03.587 iops : min= 244, max=11360, avg=6241.52, stdev=3058.29, samples=21 00:16:03.587 lat (usec) : 500=0.03%, 750=1.34%, 1000=4.61% 00:16:03.587 lat (msec) : 2=8.24%, 4=6.41%, 10=19.03%, 20=6.76%, 50=47.94% 00:16:03.587 lat (msec) : 100=3.70%, 250=1.90%, 500=0.05% 00:16:03.587 cpu : usr=99.29%, sys=0.11%, ctx=42, majf=0, minf=5550 00:16:03.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:03.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.587 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.587 issued rwts: total=65255,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.587 00:16:03.587 Run status group 0 (all jobs): 00:16:03.587 READ: bw=21.6MiB/s (22.6MB/s), 10.8MiB/s-10.9MiB/s (11.3MB/s-11.4MB/s), io=510MiB (535MB), run=23538-23645msec 00:16:03.587 WRITE: bw=23.1MiB/s (24.2MB/s), 11.5MiB/s-13.8MiB/s (12.1MB/s-14.5MB/s), io=512MiB (537MB), run=18533-22194msec 00:16:03.843 ----------------------------------------------------- 00:16:03.843 Suppressions used: 00:16:03.843 count bytes template 00:16:03.843 2 10 /usr/src/fio/parse.c 00:16:03.843 4 384 /usr/src/fio/iolog.c 00:16:03.843 1 8 libtcmalloc_minimal.so 00:16:03.843 1 904 libcrypto.so 00:16:03.843 ----------------------------------------------------- 00:16:03.843 00:16:03.843 06:14:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:03.843 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.843 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:04.098 06:14:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:04.098 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:04.098 fio-3.35 00:16:04.098 Starting 1 thread 00:16:18.985 00:16:18.985 test: (groupid=0, jobs=1): err= 0: pid=73140: Wed Nov 20 06:14:36 2024 00:16:18.985 read: IOPS=7926, BW=31.0MiB/s (32.5MB/s)(255MiB/8226msec) 00:16:18.985 slat (nsec): min=3037, max=63449, avg=3492.34, stdev=692.01 00:16:18.985 clat (usec): min=488, max=150005, avg=16141.01, stdev=5633.45 00:16:18.985 lat (usec): min=495, max=150009, avg=16144.50, stdev=5633.45 00:16:18.985 clat percentiles (msec): 00:16:18.985 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 15], 00:16:18.985 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 16], 00:16:18.985 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 18], 95.00th=[ 20], 00:16:18.985 | 99.00th=[ 24], 99.50th=[ 28], 99.90th=[ 117], 99.95th=[ 122], 00:16:18.985 | 99.99th=[ 125] 00:16:18.985 write: IOPS=16.2k, BW=63.4MiB/s (66.5MB/s)(256MiB/4035msec); 0 zone resets 00:16:18.985 slat (usec): min=4, max=168, avg= 6.56, stdev= 2.83 00:16:18.985 clat (usec): min=488, max=45821, avg=7834.77, stdev=9772.41 00:16:18.985 lat (usec): min=494, max=45826, avg=7841.33, stdev=9772.37 00:16:18.985 clat percentiles (usec): 00:16:18.985 | 1.00th=[ 619], 5.00th=[ 701], 10.00th=[ 783], 20.00th=[ 955], 00:16:18.985 | 30.00th=[ 1090], 40.00th=[ 1516], 50.00th=[ 5276], 60.00th=[ 5997], 00:16:18.985 | 70.00th=[ 6980], 80.00th=[ 8455], 90.00th=[28181], 95.00th=[30016], 00:16:18.985 | 99.00th=[34341], 99.50th=[35914], 99.90th=[39060], 99.95th=[40633], 00:16:18.985 | 99.99th=[44303] 00:16:18.985 bw ( KiB/s): min= 2688, max=87760, per=89.67%, avg=58254.22, stdev=23544.19, samples=9 00:16:18.985 iops : min= 672, max=21940, avg=14563.56, stdev=5886.05, samples=9 00:16:18.985 lat (usec) : 500=0.01%, 750=4.01%, 1000=8.01% 00:16:18.985 lat (msec) : 2=8.54%, 4=0.65%, 10=20.59%, 20=47.96%, 50=10.03% 00:16:18.985 lat (msec) : 100=0.10%, 250=0.10% 00:16:18.985 cpu : usr=99.16%, sys=0.16%, 
ctx=51, majf=0, minf=5565 00:16:18.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:18.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.985 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.985 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.985 00:16:18.985 Run status group 0 (all jobs): 00:16:18.985 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=255MiB (267MB), run=8226-8226msec 00:16:18.985 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=256MiB (268MB), run=4035-4035msec 00:16:19.549 ----------------------------------------------------- 00:16:19.549 Suppressions used: 00:16:19.549 count bytes template 00:16:19.549 1 5 /usr/src/fio/parse.c 00:16:19.549 2 192 /usr/src/fio/iolog.c 00:16:19.549 1 8 libtcmalloc_minimal.so 00:16:19.549 1 904 libcrypto.so 00:16:19.549 ----------------------------------------------------- 00:16:19.549 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:19.549 Remove shared memory files 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57182 /dev/shm/spdk_tgt_trace.pid71455 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:19.549 00:16:19.549 real 1m2.281s 00:16:19.549 user 2m17.903s 00:16:19.549 sys 0m2.589s 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:19.549 06:14:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:19.549 ************************************ 00:16:19.549 END TEST ftl_fio_basic 00:16:19.549 ************************************ 00:16:19.549 06:14:39 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:19.549 06:14:39 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:19.549 06:14:39 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:19.549 06:14:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:19.549 ************************************ 00:16:19.549 START TEST ftl_bdevperf 00:16:19.549 ************************************ 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:19.549 * Looking for test storage... 
00:16:19.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:19.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.549 --rc genhtml_branch_coverage=1 00:16:19.549 --rc genhtml_function_coverage=1 00:16:19.549 --rc genhtml_legend=1 00:16:19.549 --rc geninfo_all_blocks=1 00:16:19.549 --rc geninfo_unexecuted_blocks=1 00:16:19.549 00:16:19.549 ' 00:16:19.549 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:19.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.549 --rc genhtml_branch_coverage=1 00:16:19.549 
--rc genhtml_function_coverage=1 00:16:19.549 --rc genhtml_legend=1 00:16:19.550 --rc geninfo_all_blocks=1 00:16:19.550 --rc geninfo_unexecuted_blocks=1 00:16:19.550 00:16:19.550 ' 00:16:19.550 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.550 --rc genhtml_branch_coverage=1 00:16:19.550 --rc genhtml_function_coverage=1 00:16:19.550 --rc genhtml_legend=1 00:16:19.550 --rc geninfo_all_blocks=1 00:16:19.550 --rc geninfo_unexecuted_blocks=1 00:16:19.550 00:16:19.550 ' 00:16:19.550 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.550 --rc genhtml_branch_coverage=1 00:16:19.550 --rc genhtml_function_coverage=1 00:16:19.550 --rc genhtml_legend=1 00:16:19.550 --rc geninfo_all_blocks=1 00:16:19.550 --rc geninfo_unexecuted_blocks=1 00:16:19.550 00:16:19.550 ' 00:16:19.550 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:19.550 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:19.550 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73367 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73367 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73367 ']' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:19.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:19.805 06:14:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:19.805 [2024-11-20 06:14:39.251722] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:16:19.805 [2024-11-20 06:14:39.251842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:16:19.805 [2024-11-20 06:14:39.409111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.062 [2024-11-20 06:14:39.507194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:20.628 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:20.885 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:21.143 { 00:16:21.143 "name": "nvme0n1", 00:16:21.143 "aliases": [ 00:16:21.143 "38136224-e082-4eb9-be8f-89ce7286c805" 00:16:21.143 ], 00:16:21.143 "product_name": "NVMe disk", 00:16:21.143 "block_size": 4096, 00:16:21.143 "num_blocks": 1310720, 00:16:21.143 "uuid": "38136224-e082-4eb9-be8f-89ce7286c805", 00:16:21.143 "numa_id": -1, 00:16:21.143 "assigned_rate_limits": { 00:16:21.143 "rw_ios_per_sec": 0, 00:16:21.143 "rw_mbytes_per_sec": 0, 00:16:21.143 "r_mbytes_per_sec": 0, 00:16:21.143 "w_mbytes_per_sec": 0 00:16:21.143 }, 00:16:21.143 "claimed": true, 00:16:21.143 "claim_type": "read_many_write_one", 00:16:21.143 "zoned": false, 00:16:21.143 "supported_io_types": { 00:16:21.143 "read": true, 00:16:21.143 "write": true, 00:16:21.143 "unmap": true, 00:16:21.143 "flush": true, 00:16:21.143 "reset": true, 00:16:21.143 "nvme_admin": true, 00:16:21.143 "nvme_io": true, 00:16:21.143 "nvme_io_md": false, 00:16:21.143 "write_zeroes": true, 00:16:21.143 "zcopy": false, 00:16:21.143 "get_zone_info": false, 00:16:21.143 "zone_management": false, 00:16:21.143 "zone_append": false, 00:16:21.143 "compare": true, 00:16:21.143 "compare_and_write": false, 00:16:21.143 "abort": true, 00:16:21.143 "seek_hole": false, 00:16:21.143 "seek_data": false, 00:16:21.143 "copy": true, 00:16:21.143 "nvme_iov_md": false 00:16:21.143 }, 00:16:21.143 "driver_specific": { 00:16:21.143 
"nvme": [ 00:16:21.143 { 00:16:21.143 "pci_address": "0000:00:11.0", 00:16:21.143 "trid": { 00:16:21.143 "trtype": "PCIe", 00:16:21.143 "traddr": "0000:00:11.0" 00:16:21.143 }, 00:16:21.143 "ctrlr_data": { 00:16:21.143 "cntlid": 0, 00:16:21.143 "vendor_id": "0x1b36", 00:16:21.143 "model_number": "QEMU NVMe Ctrl", 00:16:21.143 "serial_number": "12341", 00:16:21.143 "firmware_revision": "8.0.0", 00:16:21.143 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:21.143 "oacs": { 00:16:21.143 "security": 0, 00:16:21.143 "format": 1, 00:16:21.143 "firmware": 0, 00:16:21.143 "ns_manage": 1 00:16:21.143 }, 00:16:21.143 "multi_ctrlr": false, 00:16:21.143 "ana_reporting": false 00:16:21.143 }, 00:16:21.143 "vs": { 00:16:21.143 "nvme_version": "1.4" 00:16:21.143 }, 00:16:21.143 "ns_data": { 00:16:21.143 "id": 1, 00:16:21.143 "can_share": false 00:16:21.143 } 00:16:21.143 } 00:16:21.143 ], 00:16:21.143 "mp_policy": "active_passive" 00:16:21.143 } 00:16:21.143 } 00:16:21.143 ]' 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:21.143 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:21.401 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ec0e4b03-088f-4fde-85f5-7601425bd75f 00:16:21.401 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:21.401 06:14:40 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec0e4b03-088f-4fde-85f5-7601425bd75f 00:16:21.658 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:21.658 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3b86608d-81fa-43a4-99fc-38ec2589fbd1 00:16:21.658 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3b86608d-81fa-43a4-99fc-38ec2589fbd1 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:21.916 06:14:41 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:21.916 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:22.175 { 00:16:22.175 "name": "3f35bc12-0770-4537-9f95-7b7703ab61fe", 00:16:22.175 "aliases": [ 00:16:22.175 "lvs/nvme0n1p0" 00:16:22.175 ], 00:16:22.175 "product_name": "Logical Volume", 00:16:22.175 "block_size": 4096, 00:16:22.175 "num_blocks": 26476544, 00:16:22.175 "uuid": "3f35bc12-0770-4537-9f95-7b7703ab61fe", 00:16:22.175 "assigned_rate_limits": { 00:16:22.175 "rw_ios_per_sec": 0, 00:16:22.175 "rw_mbytes_per_sec": 0, 00:16:22.175 "r_mbytes_per_sec": 0, 00:16:22.175 "w_mbytes_per_sec": 0 00:16:22.175 }, 00:16:22.175 "claimed": false, 00:16:22.175 "zoned": false, 00:16:22.175 "supported_io_types": { 00:16:22.175 "read": true, 00:16:22.175 "write": true, 00:16:22.175 "unmap": true, 00:16:22.175 "flush": false, 00:16:22.175 "reset": true, 00:16:22.175 "nvme_admin": false, 00:16:22.175 "nvme_io": false, 00:16:22.175 "nvme_io_md": false, 00:16:22.175 "write_zeroes": true, 00:16:22.175 "zcopy": false, 00:16:22.175 "get_zone_info": false, 00:16:22.175 "zone_management": false, 00:16:22.175 "zone_append": false, 00:16:22.175 "compare": false, 00:16:22.175 "compare_and_write": false, 00:16:22.175 "abort": false, 00:16:22.175 "seek_hole": true, 00:16:22.175 "seek_data": true, 00:16:22.175 "copy": false, 00:16:22.175 "nvme_iov_md": false 00:16:22.175 }, 00:16:22.175 "driver_specific": { 00:16:22.175 "lvol": { 00:16:22.175 "lvol_store_uuid": "3b86608d-81fa-43a4-99fc-38ec2589fbd1", 00:16:22.175 "base_bdev": "nvme0n1", 00:16:22.175 "thin_provision": true, 00:16:22.175 "num_allocated_clusters": 0, 00:16:22.175 "snapshot": false, 00:16:22.175 "clone": false, 00:16:22.175 "esnap_clone": false 00:16:22.175 } 00:16:22.175 } 00:16:22.175 } 00:16:22.175 ]' 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:22.175 06:14:41 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:22.433 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:22.690 { 00:16:22.690 "name": "3f35bc12-0770-4537-9f95-7b7703ab61fe", 00:16:22.690 "aliases": [ 00:16:22.690 "lvs/nvme0n1p0" 00:16:22.690 ], 00:16:22.690 "product_name": "Logical Volume", 00:16:22.690 "block_size": 4096, 00:16:22.690 "num_blocks": 26476544, 00:16:22.690 "uuid": "3f35bc12-0770-4537-9f95-7b7703ab61fe", 00:16:22.690 "assigned_rate_limits": { 00:16:22.690 "rw_ios_per_sec": 0, 00:16:22.690 "rw_mbytes_per_sec": 0, 00:16:22.690 "r_mbytes_per_sec": 0, 00:16:22.690 "w_mbytes_per_sec": 0 00:16:22.690 }, 00:16:22.690 "claimed": false, 00:16:22.690 "zoned": false, 00:16:22.690 "supported_io_types": { 00:16:22.690 "read": true, 00:16:22.690 "write": true, 00:16:22.690 "unmap": true, 00:16:22.690 "flush": false, 00:16:22.690 "reset": true, 00:16:22.690 "nvme_admin": false, 00:16:22.690 "nvme_io": false, 00:16:22.690 "nvme_io_md": false, 00:16:22.690 "write_zeroes": true, 00:16:22.690 "zcopy": false, 00:16:22.690 "get_zone_info": false, 00:16:22.690 "zone_management": false, 00:16:22.690 "zone_append": false, 00:16:22.690 "compare": false, 00:16:22.690 "compare_and_write": false, 00:16:22.690 "abort": false, 00:16:22.690 "seek_hole": true, 00:16:22.690 "seek_data": true, 00:16:22.690 "copy": false, 00:16:22.690 "nvme_iov_md": false 00:16:22.690 }, 00:16:22.690 "driver_specific": { 00:16:22.690 "lvol": { 00:16:22.690 "lvol_store_uuid": "3b86608d-81fa-43a4-99fc-38ec2589fbd1", 00:16:22.690 "base_bdev": "nvme0n1", 00:16:22.690 "thin_provision": true, 00:16:22.690 "num_allocated_clusters": 0, 00:16:22.690 "snapshot": false, 00:16:22.690 "clone": false, 00:16:22.690 "esnap_clone": false 00:16:22.690 } 00:16:22.690 } 00:16:22.690 } 00:16:22.690 ]' 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:22.690 06:14:42 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:22.949 06:14:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:22.949 06:14:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:22.949 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:22.949 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:22.949 06:14:42 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:16:22.949 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:22.949 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f35bc12-0770-4537-9f95-7b7703ab61fe 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:23.207 { 00:16:23.207 "name": "3f35bc12-0770-4537-9f95-7b7703ab61fe", 00:16:23.207 "aliases": [ 00:16:23.207 "lvs/nvme0n1p0" 00:16:23.207 ], 00:16:23.207 "product_name": "Logical Volume", 00:16:23.207 "block_size": 4096, 00:16:23.207 "num_blocks": 26476544, 00:16:23.207 "uuid": "3f35bc12-0770-4537-9f95-7b7703ab61fe", 00:16:23.207 "assigned_rate_limits": { 00:16:23.207 "rw_ios_per_sec": 0, 00:16:23.207 "rw_mbytes_per_sec": 0, 00:16:23.207 "r_mbytes_per_sec": 0, 00:16:23.207 "w_mbytes_per_sec": 0 00:16:23.207 }, 00:16:23.207 "claimed": false, 00:16:23.207 "zoned": false, 00:16:23.207 "supported_io_types": { 00:16:23.207 "read": true, 00:16:23.207 "write": true, 00:16:23.207 "unmap": true, 00:16:23.207 "flush": false, 00:16:23.207 "reset": true, 00:16:23.207 "nvme_admin": false, 00:16:23.207 "nvme_io": false, 00:16:23.207 "nvme_io_md": false, 00:16:23.207 "write_zeroes": true, 00:16:23.207 "zcopy": false, 00:16:23.207 "get_zone_info": false, 00:16:23.207 "zone_management": false, 00:16:23.207 "zone_append": false, 00:16:23.207 "compare": false, 00:16:23.207 "compare_and_write": false, 00:16:23.207 "abort": false, 00:16:23.207 "seek_hole": true, 00:16:23.207 "seek_data": true, 00:16:23.207 "copy": false, 00:16:23.207 "nvme_iov_md": false 00:16:23.207 }, 00:16:23.207 "driver_specific": { 00:16:23.207 "lvol": { 00:16:23.207 "lvol_store_uuid": "3b86608d-81fa-43a4-99fc-38ec2589fbd1", 00:16:23.207 "base_bdev": "nvme0n1", 00:16:23.207 "thin_provision": true, 00:16:23.207 "num_allocated_clusters": 0, 00:16:23.207 "snapshot": false, 00:16:23.207 "clone": false, 00:16:23.207 "esnap_clone": false 00:16:23.207 } 00:16:23.207 } 00:16:23.207 } 00:16:23.207 ]' 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:23.207 06:14:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3f35bc12-0770-4537-9f95-7b7703ab61fe -c nvc0n1p0 --l2p_dram_limit 20 00:16:23.465 [2024-11-20 06:14:42.960979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.465 [2024-11-20 06:14:42.961035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:23.465 [2024-11-20 06:14:42.961049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:23.465 [2024-11-20 06:14:42.961060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.465 [2024-11-20 06:14:42.961117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.465 [2024-11-20 06:14:42.961131] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:23.465 [2024-11-20 06:14:42.961139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:23.465 [2024-11-20 06:14:42.961148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.961172] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:23.466 [2024-11-20 06:14:42.961934] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:23.466 [2024-11-20 06:14:42.961951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.961960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:23.466 [2024-11-20 06:14:42.961968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:16:23.466 [2024-11-20 06:14:42.961977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.962115] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0a772c5f-b4c3-4b30-a0fd-2e98906fdf52 00:16:23.466 [2024-11-20 06:14:42.963181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.963215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:23.466 [2024-11-20 06:14:42.963227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:23.466 [2024-11-20 06:14:42.963236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.968436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.968464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:23.466 [2024-11-20 06:14:42.968475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:16:23.466 [2024-11-20 06:14:42.968482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.968578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.968588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:23.466 [2024-11-20 06:14:42.968600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:23.466 [2024-11-20 06:14:42.968608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.968654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.968663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:23.466 [2024-11-20 06:14:42.968672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:23.466 [2024-11-20 06:14:42.968681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.968702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:23.466 [2024-11-20 06:14:42.972269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.972301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:23.466 [2024-11-20 06:14:42.972310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:16:23.466 [2024-11-20 06:14:42.972322] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.972349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.972359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:23.466 [2024-11-20 06:14:42.972367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:23.466 [2024-11-20 06:14:42.972376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.972396] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:23.466 [2024-11-20 06:14:42.972540] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:23.466 [2024-11-20 06:14:42.972552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:23.466 [2024-11-20 06:14:42.972564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:23.466 [2024-11-20 06:14:42.972573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:23.466 [2024-11-20 06:14:42.972583] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:23.466 [2024-11-20 06:14:42.972591] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:23.466 [2024-11-20 06:14:42.972600] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:23.466 [2024-11-20 06:14:42.972607] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:23.466 [2024-11-20 06:14:42.972615] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:23.466 [2024-11-20 06:14:42.972623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.972634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:23.466 [2024-11-20 06:14:42.972642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:16:23.466 [2024-11-20 06:14:42.972651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.972730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.466 [2024-11-20 06:14:42.972740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:23.466 [2024-11-20 06:14:42.972747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:23.466 [2024-11-20 06:14:42.972757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.466 [2024-11-20 06:14:42.972860] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:23.466 [2024-11-20 06:14:42.972873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:23.466 [2024-11-20 06:14:42.972882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:23.466 [2024-11-20 06:14:42.972892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.466 [2024-11-20 06:14:42.972899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:23.466 [2024-11-20 06:14:42.972908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:23.466 [2024-11-20 06:14:42.972915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:23.466 
[2024-11-20 06:14:42.972923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:23.466 [2024-11-20 06:14:42.972930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:23.466 [2024-11-20 06:14:42.972938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:23.466 [2024-11-20 06:14:42.972944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:23.466 [2024-11-20 06:14:42.972953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:23.466 [2024-11-20 06:14:42.972960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:23.466 [2024-11-20 06:14:42.972974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:23.466 [2024-11-20 06:14:42.972983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:23.466 [2024-11-20 06:14:42.972997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:23.466 [2024-11-20 06:14:42.973012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:23.466 [2024-11-20 06:14:42.973018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:23.466 [2024-11-20 06:14:42.973033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:23.466 [2024-11-20 06:14:42.973048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:23.466 [2024-11-20 06:14:42.973056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:23.466 [2024-11-20 06:14:42.973070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:23.466 [2024-11-20 06:14:42.973076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:23.466 [2024-11-20 06:14:42.973090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:23.466 [2024-11-20 06:14:42.973098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:23.466 [2024-11-20 06:14:42.973115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:23.466 [2024-11-20 06:14:42.973122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:23.466 [2024-11-20 06:14:42.973136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:23.466 [2024-11-20 06:14:42.973143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:23.466 [2024-11-20 06:14:42.973151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:23.466 [2024-11-20 06:14:42.973164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:23.466 [2024-11-20 06:14:42.973170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:23.466 [2024-11-20 06:14:42.973178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:23.466 [2024-11-20 06:14:42.973192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:23.466 [2024-11-20 06:14:42.973198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:23.466 [2024-11-20 06:14:42.973214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:23.466 [2024-11-20 06:14:42.973223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:23.466 [2024-11-20 06:14:42.973230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.466 [2024-11-20 06:14:42.973241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:23.466 [2024-11-20 06:14:42.973248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:23.466 [2024-11-20 06:14:42.973256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:23.466 [2024-11-20 06:14:42.973262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:23.466 [2024-11-20 06:14:42.973270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:23.466 [2024-11-20 06:14:42.973277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:23.467 [2024-11-20 06:14:42.973288] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:23.467 [2024-11-20 06:14:42.973297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:23.467 [2024-11-20 06:14:42.973306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:23.467 [2024-11-20 06:14:42.973314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:23.467 [2024-11-20 06:14:42.973323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:23.467 [2024-11-20 06:14:42.973333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:23.467 [2024-11-20 06:14:42.973342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:23.467 [2024-11-20 06:14:42.973349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:23.467 [2024-11-20 06:14:42.973357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:23.467 [2024-11-20 06:14:42.973368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:23.467 [2024-11-20 06:14:42.973378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:23.467 [2024-11-20 06:14:42.973384] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:23.467 [2024-11-20 06:14:42.973393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:23.467 [2024-11-20 06:14:42.973400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:23.467 [2024-11-20 06:14:42.973409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:23.467 [2024-11-20 06:14:42.973416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:23.467 [2024-11-20 06:14:42.973424] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:23.467 [2024-11-20 06:14:42.973432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:23.467 [2024-11-20 06:14:42.973442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:23.467 [2024-11-20 06:14:42.973450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:23.467 [2024-11-20 06:14:42.973459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:23.467 [2024-11-20 06:14:42.973467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:23.467 [2024-11-20 06:14:42.973475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.467 [2024-11-20 06:14:42.973484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:23.467 [2024-11-20 06:14:42.973506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:16:23.467 [2024-11-20 06:14:42.973514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.467 [2024-11-20 06:14:42.973548] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
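Everything from "Check configuration" down to the scrub notice above is the startup of a single RPC; condensed, the create call this test issued (the UUID and cache split are the ones from this run) was:

  # Base device: the thin-provisioned lvol on 0000:00:11.0; write cache:
  # nvc0n1p0, the 5171 MiB split carved from the 0000:00:10.0 controller.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 3f35bc12-0770-4537-9f95-7b7703ab61fe -c nvc0n1p0 --l2p_dram_limit 20

The 80.00 MiB l2p region in the layout dump follows directly from the reported geometry: 20971520 L2P entries x 4 bytes per entry = 80 MiB. --l2p_dram_limit 20 only caps the resident portion, which is why the log later prints "l2p maximum resident size is: 19 (of 20) MiB".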
00:16:23.467 [2024-11-20 06:14:42.973558] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:26.770 [2024-11-20 06:14:45.671355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.671564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:26.770 [2024-11-20 06:14:45.671647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2697.788 ms 00:16:26.770 [2024-11-20 06:14:45.671683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.697481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.697641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:26.770 [2024-11-20 06:14:45.697701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.582 ms 00:16:26.770 [2024-11-20 06:14:45.697724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.697867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.697894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:26.770 [2024-11-20 06:14:45.697919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:26.770 [2024-11-20 06:14:45.697937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.740085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.740232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:26.770 [2024-11-20 06:14:45.740300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.099 ms 00:16:26.770 [2024-11-20 06:14:45.740325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.740379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.740405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:26.770 [2024-11-20 06:14:45.740427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:26.770 [2024-11-20 06:14:45.740445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.740839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.740930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:26.770 [2024-11-20 06:14:45.740984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:16:26.770 [2024-11-20 06:14:45.741105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.741233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.741323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:26.770 [2024-11-20 06:14:45.741353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:26.770 [2024-11-20 06:14:45.741372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.754571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.754684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:26.770 [2024-11-20 
06:14:45.754759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.169 ms 00:16:26.770 [2024-11-20 06:14:45.754785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.766223] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:26.770 [2024-11-20 06:14:45.771969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.772093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:26.770 [2024-11-20 06:14:45.772147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.100 ms 00:16:26.770 [2024-11-20 06:14:45.772174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.855765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.855919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:26.770 [2024-11-20 06:14:45.855979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.548 ms 00:16:26.770 [2024-11-20 06:14:45.856005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.856191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.856280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:26.770 [2024-11-20 06:14:45.856329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:16:26.770 [2024-11-20 06:14:45.856354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.881241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.881372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:26.770 [2024-11-20 06:14:45.881426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.829 ms 00:16:26.770 [2024-11-20 06:14:45.881452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.906411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.906560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:26.770 [2024-11-20 06:14:45.906630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.900 ms 00:16:26.770 [2024-11-20 06:14:45.906654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.907240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.907286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:26.770 [2024-11-20 06:14:45.907308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:16:26.770 [2024-11-20 06:14:45.907401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.770 [2024-11-20 06:14:45.980022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.770 [2024-11-20 06:14:45.980081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:26.771 [2024-11-20 06:14:45.980095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.583 ms 00:16:26.771 [2024-11-20 06:14:45.980105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.771 [2024-11-20 
06:14:46.005358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.771 [2024-11-20 06:14:46.005407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:26.771 [2024-11-20 06:14:46.005423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.184 ms 00:16:26.771 [2024-11-20 06:14:46.005433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.771 [2024-11-20 06:14:46.029892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.771 [2024-11-20 06:14:46.029935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:26.771 [2024-11-20 06:14:46.029947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.424 ms 00:16:26.771 [2024-11-20 06:14:46.029957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.771 [2024-11-20 06:14:46.054925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.771 [2024-11-20 06:14:46.055068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:26.771 [2024-11-20 06:14:46.055084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.934 ms 00:16:26.771 [2024-11-20 06:14:46.055093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.771 [2024-11-20 06:14:46.055125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.771 [2024-11-20 06:14:46.055138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:26.771 [2024-11-20 06:14:46.055147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:26.771 [2024-11-20 06:14:46.055157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.771 [2024-11-20 06:14:46.055229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.771 [2024-11-20 06:14:46.055241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:26.771 [2024-11-20 06:14:46.055249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:26.771 [2024-11-20 06:14:46.055258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.771 [2024-11-20 06:14:46.056123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3094.740 ms, result 0 00:16:26.771 { 00:16:26.771 "name": "ftl0", 00:16:26.771 "uuid": "0a772c5f-b4c3-4b30-a0fd-2e98906fdf52" 00:16:26.771 } 00:16:26.771 06:14:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:26.771 06:14:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:26.771 06:14:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:26.771 06:14:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:26.771 [2024-11-20 06:14:46.372445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:26.771 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:26.771 Zero copy mechanism will not be used. 00:16:26.771 Running I/O for 4 seconds... 
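Two details worth noting while this first run executes. The 69632-byte I/O size is 68 KiB (17 x 4096-byte blocks), which exceeds bdevperf's 65536-byte zero-copy threshold, hence the notice above that zero copy is disabled. And the MiB/s column in the summary below is just IOPS x I/O size; a quick cross-check against the reported totals:

  # 1129.25 IOPS at 69632 B per I/O, converted to MiB/s:
  awk 'BEGIN { printf "%.2f MiB/s\n", 1129.25 * 69632 / (1024 * 1024) }'
  # -> 74.99, matching the ftl0 and Total rows of the first run.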
00:16:29.091 976.00 IOPS, 64.81 MiB/s [2024-11-20T06:14:49.383Z] 1231.50 IOPS, 81.78 MiB/s [2024-11-20T06:14:50.765Z] 1187.33 IOPS, 78.85 MiB/s [2024-11-20T06:14:50.765Z] 1129.50 IOPS, 75.01 MiB/s
00:16:31.132 Latency(us)
00:16:31.132 [2024-11-20T06:14:50.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.132 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:16:31.132 ftl0 : 4.00 1129.25 74.99 0.00 0.00 935.78 182.74 46984.27
00:16:31.132 [2024-11-20T06:14:50.765Z] ===================================================================================================================
00:16:31.132 [2024-11-20T06:14:50.765Z] Total : 1129.25 74.99 0.00 0.00 935.78 182.74 46984.27
00:16:31.132 {
00:16:31.132 "results": [
00:16:31.132 {
00:16:31.132 "job": "ftl0",
00:16:31.132 "core_mask": "0x1",
00:16:31.132 "workload": "randwrite",
00:16:31.132 "status": "finished",
00:16:31.132 "queue_depth": 1,
00:16:31.132 "io_size": 69632,
00:16:31.132 "runtime": 4.001772,
00:16:31.132 "iops": 1129.2497423641328,
00:16:31.132 "mibps": 74.98924070386819,
00:16:31.132 "io_failed": 0,
00:16:31.132 "io_timeout": 0,
00:16:31.132 "avg_latency_us": 935.7810761400582,
00:16:31.132 "min_latency_us": 182.74461538461537,
00:16:31.132 "max_latency_us": 46984.27076923077
00:16:31.132 }
00:16:31.132 ],
00:16:31.132 "core_count": 1
00:16:31.132 }
00:16:31.132 [2024-11-20 06:14:50.382816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:31.133 06:14:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:16:31.133 [2024-11-20 06:14:50.469699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:16:31.133 Running I/O for 4 seconds...
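Each perform_tests run also prints its results as a JSON block like the one above; captured to a file, the headline numbers can be pulled out with jq (the file name here is illustrative):

  # Job name, IOPS and mean latency for every job in a saved results block.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json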
00:16:33.012 7709.00 IOPS, 30.11 MiB/s [2024-11-20T06:14:53.582Z] 7983.50 IOPS, 31.19 MiB/s [2024-11-20T06:14:54.523Z] 8026.00 IOPS, 31.35 MiB/s [2024-11-20T06:14:54.523Z] 7549.25 IOPS, 29.49 MiB/s
00:16:34.890 Latency(us)
00:16:34.890 [2024-11-20T06:14:54.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:34.891 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:16:34.891 ftl0 : 4.03 7516.98 29.36 0.00 0.00 16949.43 261.51 48194.17
00:16:34.891 [2024-11-20T06:14:54.524Z] ===================================================================================================================
00:16:34.891 [2024-11-20T06:14:54.524Z] Total : 7516.98 29.36 0.00 0.00 16949.43 0.00 48194.17
00:16:34.891 [2024-11-20 06:14:54.516573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:34.891 {
00:16:34.891 "results": [
00:16:34.891 {
00:16:34.891 "job": "ftl0",
00:16:34.891 "core_mask": "0x1",
00:16:34.891 "workload": "randwrite",
00:16:34.891 "status": "finished",
00:16:34.891 "queue_depth": 128,
00:16:34.891 "io_size": 4096,
00:16:34.891 "runtime": 4.034067,
00:16:34.891 "iops": 7516.979762606818,
00:16:34.891 "mibps": 29.363202197682885,
00:16:34.891 "io_failed": 0,
00:16:34.891 "io_timeout": 0,
00:16:34.891 "avg_latency_us": 16949.43311964121,
00:16:34.891 "min_latency_us": 261.51384615384615,
00:16:34.891 "max_latency_us": 48194.166153846156
00:16:34.891 }
00:16:34.891 ],
00:16:34.891 "core_count": 1
00:16:34.891 }
00:16:35.151 06:14:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:16:35.151 [2024-11-20 06:14:54.628533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:16:35.151 Running I/O for 4 seconds...
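This verify pass is the last of three back-to-back runs driven against the same idle bdevperf process. Stripped of the harness, the sequence reduces to the following (the variable name is illustrative; the parameters are the ones from this log):

  bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  "$bperf_py" perform_tests -q 1 -w randwrite -t 4 -o 69632    # low QD, 68 KiB I/O
  "$bperf_py" perform_tests -q 128 -w randwrite -t 4 -o 4096   # high QD, 4 KiB I/O
  "$bperf_py" perform_tests -q 128 -w verify -t 4 -o 4096      # write, then read back and compare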
00:16:37.034 6886.00 IOPS, 26.90 MiB/s [2024-11-20T06:14:58.052Z] 5700.50 IOPS, 22.27 MiB/s [2024-11-20T06:14:58.996Z] 5406.00 IOPS, 21.12 MiB/s [2024-11-20T06:14:58.996Z] 5169.50 IOPS, 20.19 MiB/s
00:16:39.363 Latency(us)
00:16:39.363 [2024-11-20T06:14:58.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:39.363 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:39.363 Verification LBA range: start 0x0 length 0x1400000
00:16:39.363 ftl0 : 4.02 5181.24 20.24 0.00 0.00 24639.46 226.86 111310.38
00:16:39.363 [2024-11-20T06:14:58.996Z] ===================================================================================================================
00:16:39.363 [2024-11-20T06:14:58.996Z] Total : 5181.24 20.24 0.00 0.00 24639.46 0.00 111310.38
00:16:39.363 [2024-11-20 06:14:58.658539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:39.363 {
00:16:39.363 "results": [
00:16:39.363 {
00:16:39.363 "job": "ftl0",
00:16:39.363 "core_mask": "0x1",
00:16:39.363 "workload": "verify",
00:16:39.363 "status": "finished",
00:16:39.363 "verify_range": {
00:16:39.363 "start": 0,
00:16:39.363 "length": 20971520
00:16:39.363 },
00:16:39.363 "queue_depth": 128,
00:16:39.363 "io_size": 4096,
00:16:39.363 "runtime": 4.01506,
00:16:39.363 "iops": 5181.24262152994,
00:16:39.363 "mibps": 20.23922899035133,
00:16:39.363 "io_failed": 0,
00:16:39.363 "io_timeout": 0,
00:16:39.363 "avg_latency_us": 24639.463017094426,
00:16:39.363 "min_latency_us": 226.85538461538462,
00:16:39.363 "max_latency_us": 111310.37538461538
00:16:39.364 }
00:16:39.364 ],
00:16:39.364 "core_count": 1
00:16:39.364 }
00:16:39.364 06:14:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:16:39.364 [2024-11-20 06:14:58.864816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.364 [2024-11-20 06:14:58.864871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:39.364 [2024-11-20 06:14:58.864885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:39.364 [2024-11-20 06:14:58.864895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.364 [2024-11-20 06:14:58.864917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:39.364 [2024-11-20 06:14:58.867565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.364 [2024-11-20 06:14:58.867595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:39.364 [2024-11-20 06:14:58.867609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms
00:16:39.364 [2024-11-20 06:14:58.867617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.364 [2024-11-20 06:14:58.870554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.364 [2024-11-20 06:14:58.870586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:39.364 [2024-11-20 06:14:58.870599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms
00:16:39.364 [2024-11-20 06:14:58.870612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.636 [2024-11-20 06:14:59.077126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.636 [2024-11-20 06:14:59.077187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:16:39.636 [2024-11-20 06:14:59.077205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 206.486 ms 00:16:39.636 [2024-11-20 06:14:59.077215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.084338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 06:14:59.084395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:39.636 [2024-11-20 06:14:59.084417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.081 ms 00:16:39.636 [2024-11-20 06:14:59.084432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.111183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 06:14:59.111225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:39.636 [2024-11-20 06:14:59.111241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.634 ms 00:16:39.636 [2024-11-20 06:14:59.111249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.125885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 06:14:59.125925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:39.636 [2024-11-20 06:14:59.125940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.595 ms 00:16:39.636 [2024-11-20 06:14:59.125948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.126089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 06:14:59.126100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:39.636 [2024-11-20 06:14:59.126113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:16:39.636 [2024-11-20 06:14:59.126120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.150292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 06:14:59.150434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:39.636 [2024-11-20 06:14:59.150455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.155 ms 00:16:39.636 [2024-11-20 06:14:59.150462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.174377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 06:14:59.174411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:39.636 [2024-11-20 06:14:59.174424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.884 ms 00:16:39.636 [2024-11-20 06:14:59.174431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.197426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 06:14:59.197561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:39.636 [2024-11-20 06:14:59.197581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.960 ms 00:16:39.636 [2024-11-20 06:14:59.197588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.220684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.636 [2024-11-20 
06:14:59.220790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:39.636 [2024-11-20 06:14:59.220850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.031 ms 00:16:39.636 [2024-11-20 06:14:59.220890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.636 [2024-11-20 06:14:59.220967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:39.636 [2024-11-20 06:14:59.221021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:39.636 [2024-11-20 06:14:59.221825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.221881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.221911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.221941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.221969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.222994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.223988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224210] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.224979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225156] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:39.637 [2024-11-20 06:14:59.225285] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:39.637 [2024-11-20 06:14:59.225306] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a772c5f-b4c3-4b30-a0fd-2e98906fdf52 00:16:39.637 [2024-11-20 06:14:59.225375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:39.637 [2024-11-20 06:14:59.225400] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:39.637 [2024-11-20 06:14:59.225419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:39.637 [2024-11-20 06:14:59.225440] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:39.637 [2024-11-20 06:14:59.225459] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:39.637 [2024-11-20 06:14:59.225479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:39.637 [2024-11-20 06:14:59.225604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:39.637 [2024-11-20 06:14:59.225642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:39.637 [2024-11-20 06:14:59.225660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:39.637 [2024-11-20 06:14:59.225681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.637 [2024-11-20 06:14:59.225700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:39.637 [2024-11-20 06:14:59.225722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:16:39.637 [2024-11-20 06:14:59.225740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.637 [2024-11-20 06:14:59.238155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.637 [2024-11-20 06:14:59.238257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:39.637 [2024-11-20 06:14:59.238308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.317 ms 00:16:39.637 [2024-11-20 06:14:59.238330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.637 [2024-11-20 06:14:59.238723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.637 [2024-11-20 06:14:59.238769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:39.637 [2024-11-20 06:14:59.238986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:16:39.637 [2024-11-20 06:14:59.239009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.273707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.273816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:39.907 [2024-11-20 06:14:59.273837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.273846] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.273910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.273919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.907 [2024-11-20 06:14:59.273930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.273938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.274016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.274027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.907 [2024-11-20 06:14:59.274038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.274046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.274064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.274072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.907 [2024-11-20 06:14:59.274082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.274090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.350089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.350132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.907 [2024-11-20 06:14:59.350147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.350155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.412226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.412268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.907 [2024-11-20 06:14:59.412281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.412288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.412380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.412393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:39.907 [2024-11-20 06:14:59.412402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.412410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.412449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.412458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:39.907 [2024-11-20 06:14:59.412467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.907 [2024-11-20 06:14:59.412474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.907 [2024-11-20 06:14:59.412579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.907 [2024-11-20 06:14:59.412589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:39.907 [2024-11-20 06:14:59.412603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:16:39.907 [2024-11-20 06:14:59.412610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.907 [2024-11-20 06:14:59.412640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:39.907 [2024-11-20 06:14:59.412648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:39.907 [2024-11-20 06:14:59.412658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:39.907 [2024-11-20 06:14:59.412665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.907 [2024-11-20 06:14:59.412718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:39.907 [2024-11-20 06:14:59.412726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:39.907 [2024-11-20 06:14:59.412737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:39.907 [2024-11-20 06:14:59.412745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.907 [2024-11-20 06:14:59.412785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:39.907 [2024-11-20 06:14:59.412801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:39.907 [2024-11-20 06:14:59.412811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:39.907 [2024-11-20 06:14:59.412818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.907 [2024-11-20 06:14:59.412936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.081 ms, result 0
00:16:39.907 true
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73367
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73367 ']'
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73367
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73367
00:16:39.907 killing process with pid 73367
Received shutdown signal, test time was about 4.000000 seconds
00:16:39.907
00:16:39.907                                                           Latency(us)
00:16:39.907 [2024-11-20T06:14:59.540Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:16:39.907 [2024-11-20T06:14:59.540Z] ===================================================================================================================
00:16:39.907 [2024-11-20T06:14:59.540Z] Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73367'
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73367
00:16:39.907 06:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73367
00:16:40.850 06:15:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:16:40.850 Remove shared memory files
00:16:40.850 06:15:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:16:40.850 06:15:00 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:16:40.850 06:15:00
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:40.850 06:15:00 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:40.850 06:15:00 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:40.850 06:15:00 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:40.850 06:15:00 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:40.850 ************************************ 00:16:40.850 END TEST ftl_bdevperf 00:16:40.850 ************************************ 00:16:40.850 00:16:40.850 real 0m21.144s 00:16:40.850 user 0m23.850s 00:16:40.850 sys 0m0.836s 00:16:40.850 06:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:40.850 06:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:40.850 06:15:00 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:40.850 06:15:00 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:40.850 06:15:00 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:40.850 06:15:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:40.850 ************************************ 00:16:40.850 START TEST ftl_trim 00:16:40.850 ************************************ 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:40.850 * Looking for test storage... 00:16:40.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.850 06:15:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:40.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.850 --rc genhtml_branch_coverage=1 00:16:40.850 --rc genhtml_function_coverage=1 00:16:40.850 --rc genhtml_legend=1 00:16:40.850 --rc geninfo_all_blocks=1 00:16:40.850 --rc geninfo_unexecuted_blocks=1 00:16:40.850 00:16:40.850 ' 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:40.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.850 --rc genhtml_branch_coverage=1 00:16:40.850 --rc genhtml_function_coverage=1 00:16:40.850 --rc genhtml_legend=1 00:16:40.850 --rc geninfo_all_blocks=1 00:16:40.850 --rc geninfo_unexecuted_blocks=1 00:16:40.850 00:16:40.850 ' 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:40.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.850 --rc genhtml_branch_coverage=1 00:16:40.850 --rc genhtml_function_coverage=1 00:16:40.850 --rc genhtml_legend=1 00:16:40.850 --rc geninfo_all_blocks=1 00:16:40.850 --rc geninfo_unexecuted_blocks=1 00:16:40.850 00:16:40.850 ' 00:16:40.850 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:40.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.851 --rc genhtml_branch_coverage=1 00:16:40.851 --rc genhtml_function_coverage=1 00:16:40.851 --rc genhtml_legend=1 00:16:40.851 --rc geninfo_all_blocks=1 00:16:40.851 --rc geninfo_unexecuted_blocks=1 00:16:40.851 00:16:40.851 ' 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
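The xtrace run above is scripts/common.sh checking the installed lcov version: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings into arrays on the characters ., - and :, then walks the fields of the longer array, comparing numerically and treating a missing field as 0 (that is what the traced decimal helper does). A minimal standalone sketch of the same split-and-compare pattern follows; it is an illustration written for this note, not the repo's code, and the helper name cmp_ver is hypothetical. It assumes purely numeric fields, which is all this lcov check needs.

#!/usr/bin/env bash
# Sketch of the version comparison traced from scripts/common.sh.
# cmp_ver A B returns 0 (true) when A < B; fields are compared numerically
# and a missing field counts as 0, so "1.15" vs "2" compares (1,15) vs (2,0).
cmp_ver() {
    local -a v1 v2
    local i f1 f2
    IFS=.-: read -ra v1 <<< "$1"   # split on '.', '-' and ':'
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        f1=${v1[i]:-0} f2=${v2[i]:-0}
        ((f1 < f2)) && return 0
        ((f1 > f2)) && return 1
    done
    return 1   # equal versions are not "less than"
}

cmp_ver 1.15 2 && echo "lcov 1.15 is older than 2"   # prints the message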
00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.851 06:15:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73708 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:40.851 06:15:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73708 00:16:40.851 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73708 ']' 00:16:40.851 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.851 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.851 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.851 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.851 06:15:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:41.112 [2024-11-20 06:15:00.519294] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:16:41.112 [2024-11-20 06:15:00.519643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73708 ] 00:16:41.112 [2024-11-20 06:15:00.682052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.373 [2024-11-20 06:15:00.789434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.373 [2024-11-20 06:15:00.789647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.373 [2024-11-20 06:15:00.789836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.945 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.945 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:16:41.945 06:15:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:41.945 06:15:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:41.945 06:15:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:41.945 06:15:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:41.945 06:15:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:41.945 06:15:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:42.206 06:15:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:42.206 06:15:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:42.206 06:15:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:42.206 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:42.206 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:42.206 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:42.206 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:42.206 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:42.467 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:42.467 { 00:16:42.467 "name": "nvme0n1", 00:16:42.467 "aliases": [ 
00:16:42.467 "7dd57cd1-dea6-48ff-b8d3-f0ede17ade48" 00:16:42.467 ], 00:16:42.467 "product_name": "NVMe disk", 00:16:42.467 "block_size": 4096, 00:16:42.467 "num_blocks": 1310720, 00:16:42.467 "uuid": "7dd57cd1-dea6-48ff-b8d3-f0ede17ade48", 00:16:42.467 "numa_id": -1, 00:16:42.467 "assigned_rate_limits": { 00:16:42.467 "rw_ios_per_sec": 0, 00:16:42.467 "rw_mbytes_per_sec": 0, 00:16:42.467 "r_mbytes_per_sec": 0, 00:16:42.467 "w_mbytes_per_sec": 0 00:16:42.467 }, 00:16:42.467 "claimed": true, 00:16:42.467 "claim_type": "read_many_write_one", 00:16:42.467 "zoned": false, 00:16:42.467 "supported_io_types": { 00:16:42.467 "read": true, 00:16:42.467 "write": true, 00:16:42.467 "unmap": true, 00:16:42.467 "flush": true, 00:16:42.467 "reset": true, 00:16:42.467 "nvme_admin": true, 00:16:42.467 "nvme_io": true, 00:16:42.467 "nvme_io_md": false, 00:16:42.467 "write_zeroes": true, 00:16:42.467 "zcopy": false, 00:16:42.467 "get_zone_info": false, 00:16:42.467 "zone_management": false, 00:16:42.467 "zone_append": false, 00:16:42.467 "compare": true, 00:16:42.467 "compare_and_write": false, 00:16:42.467 "abort": true, 00:16:42.467 "seek_hole": false, 00:16:42.467 "seek_data": false, 00:16:42.467 "copy": true, 00:16:42.467 "nvme_iov_md": false 00:16:42.467 }, 00:16:42.467 "driver_specific": { 00:16:42.467 "nvme": [ 00:16:42.467 { 00:16:42.467 "pci_address": "0000:00:11.0", 00:16:42.467 "trid": { 00:16:42.467 "trtype": "PCIe", 00:16:42.467 "traddr": "0000:00:11.0" 00:16:42.467 }, 00:16:42.467 "ctrlr_data": { 00:16:42.467 "cntlid": 0, 00:16:42.467 "vendor_id": "0x1b36", 00:16:42.467 "model_number": "QEMU NVMe Ctrl", 00:16:42.467 "serial_number": "12341", 00:16:42.467 "firmware_revision": "8.0.0", 00:16:42.467 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:42.467 "oacs": { 00:16:42.467 "security": 0, 00:16:42.467 "format": 1, 00:16:42.467 "firmware": 0, 00:16:42.467 "ns_manage": 1 00:16:42.467 }, 00:16:42.467 "multi_ctrlr": false, 00:16:42.467 "ana_reporting": false 00:16:42.467 }, 00:16:42.467 "vs": { 00:16:42.467 "nvme_version": "1.4" 00:16:42.467 }, 00:16:42.467 "ns_data": { 00:16:42.467 "id": 1, 00:16:42.467 "can_share": false 00:16:42.467 } 00:16:42.467 } 00:16:42.467 ], 00:16:42.467 "mp_policy": "active_passive" 00:16:42.467 } 00:16:42.467 } 00:16:42.467 ]' 00:16:42.467 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:42.467 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:42.467 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:42.468 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:42.468 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:42.468 06:15:01 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:16:42.468 06:15:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:42.468 06:15:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:42.468 06:15:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:42.468 06:15:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:42.468 06:15:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:42.729 06:15:02 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3b86608d-81fa-43a4-99fc-38ec2589fbd1 00:16:42.729 06:15:02 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:42.729 06:15:02 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 3b86608d-81fa-43a4-99fc-38ec2589fbd1 00:16:42.990 06:15:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:42.990 06:15:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7507f9ff-cad8-417c-bf6b-5fd3e53b7de4 00:16:42.990 06:15:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7507f9ff-cad8-417c-bf6b-5fd3e53b7de4 00:16:43.250 06:15:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.250 06:15:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.250 06:15:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:43.250 06:15:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:43.250 06:15:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.250 06:15:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:43.250 06:15:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.250 06:15:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.250 06:15:02 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:43.250 06:15:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:43.250 06:15:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:43.250 06:15:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.510 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:43.510 { 00:16:43.510 "name": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 00:16:43.510 "aliases": [ 00:16:43.510 "lvs/nvme0n1p0" 00:16:43.510 ], 00:16:43.510 "product_name": "Logical Volume", 00:16:43.510 "block_size": 4096, 00:16:43.510 "num_blocks": 26476544, 00:16:43.510 "uuid": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 00:16:43.510 "assigned_rate_limits": { 00:16:43.510 "rw_ios_per_sec": 0, 00:16:43.510 "rw_mbytes_per_sec": 0, 00:16:43.510 "r_mbytes_per_sec": 0, 00:16:43.510 "w_mbytes_per_sec": 0 00:16:43.510 }, 00:16:43.510 "claimed": false, 00:16:43.510 "zoned": false, 00:16:43.510 "supported_io_types": { 00:16:43.510 "read": true, 00:16:43.510 "write": true, 00:16:43.510 "unmap": true, 00:16:43.510 "flush": false, 00:16:43.510 "reset": true, 00:16:43.510 "nvme_admin": false, 00:16:43.510 "nvme_io": false, 00:16:43.510 "nvme_io_md": false, 00:16:43.510 "write_zeroes": true, 00:16:43.510 "zcopy": false, 00:16:43.510 "get_zone_info": false, 00:16:43.510 "zone_management": false, 00:16:43.510 "zone_append": false, 00:16:43.510 "compare": false, 00:16:43.510 "compare_and_write": false, 00:16:43.510 "abort": false, 00:16:43.510 "seek_hole": true, 00:16:43.510 "seek_data": true, 00:16:43.510 "copy": false, 00:16:43.510 "nvme_iov_md": false 00:16:43.510 }, 00:16:43.510 "driver_specific": { 00:16:43.510 "lvol": { 00:16:43.510 "lvol_store_uuid": "7507f9ff-cad8-417c-bf6b-5fd3e53b7de4", 00:16:43.510 "base_bdev": "nvme0n1", 00:16:43.510 "thin_provision": true, 00:16:43.510 "num_allocated_clusters": 0, 00:16:43.510 "snapshot": false, 00:16:43.510 "clone": false, 00:16:43.510 "esnap_clone": false 00:16:43.510 } 00:16:43.510 } 00:16:43.510 } 00:16:43.510 ]' 00:16:43.510 06:15:03 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:43.510 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:43.510 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:43.510 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:43.510 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:43.510 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:43.510 06:15:03 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:43.510 06:15:03 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:43.510 06:15:03 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:43.771 06:15:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:43.771 06:15:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:43.771 06:15:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.771 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:43.771 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:43.771 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:43.771 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:43.771 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:44.062 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:44.062 { 00:16:44.062 "name": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 00:16:44.062 "aliases": [ 00:16:44.062 "lvs/nvme0n1p0" 00:16:44.062 ], 00:16:44.062 "product_name": "Logical Volume", 00:16:44.062 "block_size": 4096, 00:16:44.062 "num_blocks": 26476544, 00:16:44.062 "uuid": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 00:16:44.062 "assigned_rate_limits": { 00:16:44.062 "rw_ios_per_sec": 0, 00:16:44.062 "rw_mbytes_per_sec": 0, 00:16:44.062 "r_mbytes_per_sec": 0, 00:16:44.062 "w_mbytes_per_sec": 0 00:16:44.062 }, 00:16:44.062 "claimed": false, 00:16:44.062 "zoned": false, 00:16:44.062 "supported_io_types": { 00:16:44.062 "read": true, 00:16:44.062 "write": true, 00:16:44.062 "unmap": true, 00:16:44.062 "flush": false, 00:16:44.062 "reset": true, 00:16:44.062 "nvme_admin": false, 00:16:44.062 "nvme_io": false, 00:16:44.062 "nvme_io_md": false, 00:16:44.062 "write_zeroes": true, 00:16:44.062 "zcopy": false, 00:16:44.062 "get_zone_info": false, 00:16:44.062 "zone_management": false, 00:16:44.062 "zone_append": false, 00:16:44.062 "compare": false, 00:16:44.062 "compare_and_write": false, 00:16:44.062 "abort": false, 00:16:44.062 "seek_hole": true, 00:16:44.062 "seek_data": true, 00:16:44.062 "copy": false, 00:16:44.062 "nvme_iov_md": false 00:16:44.062 }, 00:16:44.062 "driver_specific": { 00:16:44.062 "lvol": { 00:16:44.062 "lvol_store_uuid": "7507f9ff-cad8-417c-bf6b-5fd3e53b7de4", 00:16:44.062 "base_bdev": "nvme0n1", 00:16:44.062 "thin_provision": true, 00:16:44.062 "num_allocated_clusters": 0, 00:16:44.062 "snapshot": false, 00:16:44.062 "clone": false, 00:16:44.062 "esnap_clone": false 00:16:44.062 } 00:16:44.062 } 00:16:44.062 } 00:16:44.062 ]' 00:16:44.062 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:44.062 06:15:03 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:16:44.062 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:44.062 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:44.062 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:44.062 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:44.062 06:15:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:44.062 06:15:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:44.331 06:15:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:44.331 06:15:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:44.331 06:15:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:44.331 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:44.331 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:44.331 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:44.331 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:44.331 06:15:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 00:16:44.592 06:15:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:44.592 { 00:16:44.592 "name": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 00:16:44.592 "aliases": [ 00:16:44.592 "lvs/nvme0n1p0" 00:16:44.592 ], 00:16:44.592 "product_name": "Logical Volume", 00:16:44.592 "block_size": 4096, 00:16:44.592 "num_blocks": 26476544, 00:16:44.592 "uuid": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 00:16:44.592 "assigned_rate_limits": { 00:16:44.592 "rw_ios_per_sec": 0, 00:16:44.592 "rw_mbytes_per_sec": 0, 00:16:44.592 "r_mbytes_per_sec": 0, 00:16:44.592 "w_mbytes_per_sec": 0 00:16:44.592 }, 00:16:44.592 "claimed": false, 00:16:44.592 "zoned": false, 00:16:44.592 "supported_io_types": { 00:16:44.592 "read": true, 00:16:44.592 "write": true, 00:16:44.592 "unmap": true, 00:16:44.592 "flush": false, 00:16:44.592 "reset": true, 00:16:44.592 "nvme_admin": false, 00:16:44.592 "nvme_io": false, 00:16:44.592 "nvme_io_md": false, 00:16:44.592 "write_zeroes": true, 00:16:44.592 "zcopy": false, 00:16:44.592 "get_zone_info": false, 00:16:44.592 "zone_management": false, 00:16:44.592 "zone_append": false, 00:16:44.592 "compare": false, 00:16:44.592 "compare_and_write": false, 00:16:44.592 "abort": false, 00:16:44.592 "seek_hole": true, 00:16:44.592 "seek_data": true, 00:16:44.592 "copy": false, 00:16:44.592 "nvme_iov_md": false 00:16:44.592 }, 00:16:44.592 "driver_specific": { 00:16:44.592 "lvol": { 00:16:44.592 "lvol_store_uuid": "7507f9ff-cad8-417c-bf6b-5fd3e53b7de4", 00:16:44.592 "base_bdev": "nvme0n1", 00:16:44.592 "thin_provision": true, 00:16:44.592 "num_allocated_clusters": 0, 00:16:44.592 "snapshot": false, 00:16:44.592 "clone": false, 00:16:44.592 "esnap_clone": false 00:16:44.592 } 00:16:44.592 } 00:16:44.592 } 00:16:44.592 ]' 00:16:44.592 06:15:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:44.592 06:15:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:44.592 06:15:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:44.592 06:15:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:16:44.592 06:15:04 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:44.592 06:15:04 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:44.592 06:15:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:44.592 06:15:04 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3052ccdb-9fe7-435a-b3ee-7ca3055572a9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:44.853 [2024-11-20 06:15:04.326478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.853 [2024-11-20 06:15:04.326535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.853 [2024-11-20 06:15:04.326554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:44.853 [2024-11-20 06:15:04.326562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.853 [2024-11-20 06:15:04.329703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.853 [2024-11-20 06:15:04.329816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.853 [2024-11-20 06:15:04.329879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.115 ms 00:16:44.853 [2024-11-20 06:15:04.329903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.853 [2024-11-20 06:15:04.330282] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.853 [2024-11-20 06:15:04.331055] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.853 [2024-11-20 06:15:04.331157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.853 [2024-11-20 06:15:04.331257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.853 [2024-11-20 06:15:04.331287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:16:44.853 [2024-11-20 06:15:04.331308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.853 [2024-11-20 06:15:04.331442] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:16:44.853 [2024-11-20 06:15:04.332541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.853 [2024-11-20 06:15:04.332631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:44.854 [2024-11-20 06:15:04.332682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:44.854 [2024-11-20 06:15:04.332707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.337982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.854 [2024-11-20 06:15:04.338084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.854 [2024-11-20 06:15:04.338137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.203 ms 00:16:44.854 [2024-11-20 06:15:04.338162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.338304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.854 [2024-11-20 06:15:04.338339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.854 [2024-11-20 06:15:04.338364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.068 ms 00:16:44.854 [2024-11-20 06:15:04.338419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.338456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.854 [2024-11-20 06:15:04.338470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.854 [2024-11-20 06:15:04.338478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:44.854 [2024-11-20 06:15:04.338499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.338524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:44.854 [2024-11-20 06:15:04.342077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.854 [2024-11-20 06:15:04.342107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.854 [2024-11-20 06:15:04.342120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.555 ms 00:16:44.854 [2024-11-20 06:15:04.342128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.342186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.854 [2024-11-20 06:15:04.342195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.854 [2024-11-20 06:15:04.342205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:44.854 [2024-11-20 06:15:04.342224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.342261] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:44.854 [2024-11-20 06:15:04.342394] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.854 [2024-11-20 06:15:04.342412] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.854 [2024-11-20 06:15:04.342424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:44.854 [2024-11-20 06:15:04.342435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342444] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342453] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:44.854 [2024-11-20 06:15:04.342460] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.854 [2024-11-20 06:15:04.342469] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.854 [2024-11-20 06:15:04.342477] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.854 [2024-11-20 06:15:04.342486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.854 [2024-11-20 06:15:04.342504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.854 [2024-11-20 06:15:04.342514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:16:44.854 [2024-11-20 06:15:04.342521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.342621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.854 
[2024-11-20 06:15:04.342630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.854 [2024-11-20 06:15:04.342639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:44.854 [2024-11-20 06:15:04.342647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.854 [2024-11-20 06:15:04.342765] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.854 [2024-11-20 06:15:04.342776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.854 [2024-11-20 06:15:04.342785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.854 [2024-11-20 06:15:04.342809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.854 [2024-11-20 06:15:04.342833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.854 [2024-11-20 06:15:04.342848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.854 [2024-11-20 06:15:04.342855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:44.854 [2024-11-20 06:15:04.342862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.854 [2024-11-20 06:15:04.342869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.854 [2024-11-20 06:15:04.342877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:44.854 [2024-11-20 06:15:04.342884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.854 [2024-11-20 06:15:04.342900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.854 [2024-11-20 06:15:04.342924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.854 [2024-11-20 06:15:04.342945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.854 [2024-11-20 06:15:04.342968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.854 [2024-11-20 06:15:04.342983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:44.854 [2024-11-20 06:15:04.342989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:44.854 [2024-11-20 06:15:04.342997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.854 [2024-11-20 06:15:04.343003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.854 [2024-11-20 06:15:04.343014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:44.854 [2024-11-20 06:15:04.343020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.854 [2024-11-20 06:15:04.343028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.854 [2024-11-20 06:15:04.343035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:44.854 [2024-11-20 06:15:04.343043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.854 [2024-11-20 06:15:04.343050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.854 [2024-11-20 06:15:04.343058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:44.854 [2024-11-20 06:15:04.343065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.854 [2024-11-20 06:15:04.343072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.854 [2024-11-20 06:15:04.343079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:44.854 [2024-11-20 06:15:04.343087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.854 [2024-11-20 06:15:04.343094] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.854 [2024-11-20 06:15:04.343103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.854 [2024-11-20 06:15:04.343110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.854 [2024-11-20 06:15:04.343118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.854 [2024-11-20 06:15:04.343130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:44.854 [2024-11-20 06:15:04.343142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.854 [2024-11-20 06:15:04.343149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.854 [2024-11-20 06:15:04.343157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.854 [2024-11-20 06:15:04.343164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.854 [2024-11-20 06:15:04.343172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.855 [2024-11-20 06:15:04.343183] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.855 [2024-11-20 06:15:04.343194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.855 [2024-11-20 06:15:04.343204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:44.855 [2024-11-20 06:15:04.343214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:44.855 [2024-11-20 06:15:04.343221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:44.855 [2024-11-20 06:15:04.343230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:44.855 [2024-11-20 06:15:04.343237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:44.855 [2024-11-20 06:15:04.343245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:44.855 [2024-11-20 06:15:04.343253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:44.855 [2024-11-20 06:15:04.343261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:44.855 [2024-11-20 06:15:04.343269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:44.855 [2024-11-20 06:15:04.343279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:44.855 [2024-11-20 06:15:04.343286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:44.855 [2024-11-20 06:15:04.343295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:44.855 [2024-11-20 06:15:04.343302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:44.855 [2024-11-20 06:15:04.343310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:44.855 [2024-11-20 06:15:04.343317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.855 [2024-11-20 06:15:04.343330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.855 [2024-11-20 06:15:04.343338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.855 [2024-11-20 06:15:04.343347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.855 [2024-11-20 06:15:04.343354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.855 [2024-11-20 06:15:04.343362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.855 [2024-11-20 06:15:04.343370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.855 [2024-11-20 06:15:04.343378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.855 [2024-11-20 06:15:04.343386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:16:44.855 [2024-11-20 06:15:04.343394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.855 [2024-11-20 06:15:04.343451] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:44.855 [2024-11-20 06:15:04.343464] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:48.195 [2024-11-20 06:15:07.333980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.334190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:48.195 [2024-11-20 06:15:07.334211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2990.516 ms 00:16:48.195 [2024-11-20 06:15:07.334222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.359917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.359961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:48.195 [2024-11-20 06:15:07.359974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.456 ms 00:16:48.195 [2024-11-20 06:15:07.359984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.360127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.360140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:48.195 [2024-11-20 06:15:07.360149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:48.195 [2024-11-20 06:15:07.360160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.401318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.401358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.195 [2024-11-20 06:15:07.401371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.113 ms 00:16:48.195 [2024-11-20 06:15:07.401383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.401467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.401480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:48.195 [2024-11-20 06:15:07.401489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:48.195 [2024-11-20 06:15:07.401508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.401829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.401855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:48.195 [2024-11-20 06:15:07.401865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:16:48.195 [2024-11-20 06:15:07.401873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.401988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.402007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:48.195 [2024-11-20 06:15:07.402015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:48.195 [2024-11-20 06:15:07.402026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.418100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.418215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:48.195 [2024-11-20 06:15:07.418271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.038 ms 00:16:48.195 [2024-11-20 06:15:07.418296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.429664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:48.195 [2024-11-20 06:15:07.443815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.443926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:48.195 [2024-11-20 06:15:07.443979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.408 ms 00:16:48.195 [2024-11-20 06:15:07.444002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.531725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.531905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:48.195 [2024-11-20 06:15:07.531975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.641 ms 00:16:48.195 [2024-11-20 06:15:07.532000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.532243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.532300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:48.195 [2024-11-20 06:15:07.532351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:16:48.195 [2024-11-20 06:15:07.532371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.555775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.555890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:48.195 [2024-11-20 06:15:07.555944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.366 ms 00:16:48.195 [2024-11-20 06:15:07.555967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.579099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.579200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:48.195 [2024-11-20 06:15:07.579266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:16:48.195 [2024-11-20 06:15:07.579286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.579894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.579981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:48.195 [2024-11-20 06:15:07.580030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:16:48.195 [2024-11-20 06:15:07.580051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.653170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.653314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:48.195 [2024-11-20 06:15:07.653373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.010 ms 00:16:48.195 [2024-11-20 06:15:07.653397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
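The entries above all follow one fixed pattern from mngt/ftl_mngt.c: every management step emits an Action marker followed by name, duration, and status lines. A minimal sketch for ranking steps by time, assuming this console output has been captured to a plain log file with one entry per line (the default path below is hypothetical; the regexes copy the trace_step format exactly as printed above):

    #!/usr/bin/env python3
    # Pair each trace_step "name:" entry with the "duration:" entry that
    # follows it, then rank the FTL management steps by total time spent.
    import re
    import sys
    from collections import defaultdict

    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

    def summarize(path):
        totals = defaultdict(float)
        current = None
        with open(path) as log:
            for line in log:
                m = NAME_RE.search(line)
                if m:
                    current = m.group(1).strip()
                    continue
                m = DUR_RE.search(line)
                if m and current is not None:
                    totals[current] += float(m.group(1))
                    current = None
        for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
            print(f"{ms:10.3f} ms  {name}")

    if __name__ == "__main__":
        # Path is hypothetical; pass the captured console log as argv[1].
        summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

On this run such a summary would be dominated by 'Scrub NV cache' (2990.516 ms above) within the 3408.750 ms 'FTL startup' total reported below.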
00:16:48.195 [2024-11-20 06:15:07.686444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.686637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:48.195 [2024-11-20 06:15:07.686732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.540 ms 00:16:48.195 [2024-11-20 06:15:07.686769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.710240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.710388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:48.195 [2024-11-20 06:15:07.710450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.371 ms 00:16:48.195 [2024-11-20 06:15:07.710473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.733847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.733972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:48.195 [2024-11-20 06:15:07.734102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.296 ms 00:16:48.195 [2024-11-20 06:15:07.734146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.195 [2024-11-20 06:15:07.734288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.195 [2024-11-20 06:15:07.734372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:48.195 [2024-11-20 06:15:07.734424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:48.195 [2024-11-20 06:15:07.734447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.196 [2024-11-20 06:15:07.734545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.196 [2024-11-20 06:15:07.734617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:48.196 [2024-11-20 06:15:07.734662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:48.196 [2024-11-20 06:15:07.734681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.196 [2024-11-20 06:15:07.735534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.196 [2024-11-20 06:15:07.738571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3408.750 ms, result 0 00:16:48.196 [2024-11-20 06:15:07.739430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:48.196 { 00:16:48.196 "name": "ftl0", 00:16:48.196 "uuid": "174d453c-b9d5-4fbe-9e30-c11a2f569373" 00:16:48.196 } 00:16:48.196 06:15:07 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:48.196 06:15:07 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:16:48.196 06:15:07 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:48.196 06:15:07 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:16:48.196 06:15:07 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:48.196 06:15:07 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:48.196 06:15:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:48.454 06:15:07 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:48.712 [ 00:16:48.712 { 00:16:48.712 "name": "ftl0", 00:16:48.712 "aliases": [ 00:16:48.712 "174d453c-b9d5-4fbe-9e30-c11a2f569373" 00:16:48.712 ], 00:16:48.712 "product_name": "FTL disk", 00:16:48.712 "block_size": 4096, 00:16:48.712 "num_blocks": 23592960, 00:16:48.712 "uuid": "174d453c-b9d5-4fbe-9e30-c11a2f569373", 00:16:48.712 "assigned_rate_limits": { 00:16:48.712 "rw_ios_per_sec": 0, 00:16:48.712 "rw_mbytes_per_sec": 0, 00:16:48.712 "r_mbytes_per_sec": 0, 00:16:48.712 "w_mbytes_per_sec": 0 00:16:48.712 }, 00:16:48.712 "claimed": false, 00:16:48.712 "zoned": false, 00:16:48.712 "supported_io_types": { 00:16:48.712 "read": true, 00:16:48.712 "write": true, 00:16:48.712 "unmap": true, 00:16:48.712 "flush": true, 00:16:48.712 "reset": false, 00:16:48.712 "nvme_admin": false, 00:16:48.712 "nvme_io": false, 00:16:48.712 "nvme_io_md": false, 00:16:48.712 "write_zeroes": true, 00:16:48.712 "zcopy": false, 00:16:48.712 "get_zone_info": false, 00:16:48.712 "zone_management": false, 00:16:48.712 "zone_append": false, 00:16:48.712 "compare": false, 00:16:48.712 "compare_and_write": false, 00:16:48.712 "abort": false, 00:16:48.712 "seek_hole": false, 00:16:48.712 "seek_data": false, 00:16:48.712 "copy": false, 00:16:48.712 "nvme_iov_md": false 00:16:48.712 }, 00:16:48.712 "driver_specific": { 00:16:48.712 "ftl": { 00:16:48.712 "base_bdev": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 00:16:48.712 "cache": "nvc0n1p0" 00:16:48.712 } 00:16:48.712 } 00:16:48.712 } 00:16:48.712 ] 00:16:48.712 06:15:08 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:16:48.712 06:15:08 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:48.712 06:15:08 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:48.971 06:15:08 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:48.971 06:15:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:48.971 06:15:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:48.971 { 00:16:48.971 "name": "ftl0", 00:16:48.971 "aliases": [ 00:16:48.971 "174d453c-b9d5-4fbe-9e30-c11a2f569373" 00:16:48.971 ], 00:16:48.971 "product_name": "FTL disk", 00:16:48.971 "block_size": 4096, 00:16:48.971 "num_blocks": 23592960, 00:16:48.971 "uuid": "174d453c-b9d5-4fbe-9e30-c11a2f569373", 00:16:48.971 "assigned_rate_limits": { 00:16:48.971 "rw_ios_per_sec": 0, 00:16:48.971 "rw_mbytes_per_sec": 0, 00:16:48.971 "r_mbytes_per_sec": 0, 00:16:48.971 "w_mbytes_per_sec": 0 00:16:48.971 }, 00:16:48.971 "claimed": false, 00:16:48.971 "zoned": false, 00:16:48.971 "supported_io_types": { 00:16:48.971 "read": true, 00:16:48.971 "write": true, 00:16:48.971 "unmap": true, 00:16:48.971 "flush": true, 00:16:48.971 "reset": false, 00:16:48.971 "nvme_admin": false, 00:16:48.971 "nvme_io": false, 00:16:48.971 "nvme_io_md": false, 00:16:48.971 "write_zeroes": true, 00:16:48.971 "zcopy": false, 00:16:48.971 "get_zone_info": false, 00:16:48.971 "zone_management": false, 00:16:48.971 "zone_append": false, 00:16:48.971 "compare": false, 00:16:48.971 "compare_and_write": false, 00:16:48.971 "abort": false, 00:16:48.971 "seek_hole": false, 00:16:48.971 "seek_data": false, 00:16:48.971 "copy": false, 00:16:48.971 "nvme_iov_md": false 00:16:48.971 }, 00:16:48.971 "driver_specific": { 00:16:48.971 "ftl": { 00:16:48.971 "base_bdev": "3052ccdb-9fe7-435a-b3ee-7ca3055572a9", 
00:16:48.971 "cache": "nvc0n1p0" 00:16:48.971 } 00:16:48.971 } 00:16:48.971 } 00:16:48.971 ]' 00:16:48.971 06:15:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:49.229 06:15:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:49.229 06:15:08 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:49.229 [2024-11-20 06:15:08.823058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.229 [2024-11-20 06:15:08.823257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.229 [2024-11-20 06:15:08.823279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:49.229 [2024-11-20 06:15:08.823292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.229 [2024-11-20 06:15:08.823330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:49.229 [2024-11-20 06:15:08.825901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.229 [2024-11-20 06:15:08.825932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.229 [2024-11-20 06:15:08.825947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:16:49.229 [2024-11-20 06:15:08.825955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.229 [2024-11-20 06:15:08.826432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.229 [2024-11-20 06:15:08.826547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.229 [2024-11-20 06:15:08.826564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:16:49.229 [2024-11-20 06:15:08.826572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.229 [2024-11-20 06:15:08.830210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.229 [2024-11-20 06:15:08.830295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:49.229 [2024-11-20 06:15:08.830310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.609 ms 00:16:49.229 [2024-11-20 06:15:08.830319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.229 [2024-11-20 06:15:08.837277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.229 [2024-11-20 06:15:08.837378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:49.229 [2024-11-20 06:15:08.837396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:16:49.229 [2024-11-20 06:15:08.837404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.229 [2024-11-20 06:15:08.860773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.229 [2024-11-20 06:15:08.860805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:49.230 [2024-11-20 06:15:08.860821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.297 ms 00:16:49.230 [2024-11-20 06:15:08.860828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.488 [2024-11-20 06:15:08.875360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.488 [2024-11-20 06:15:08.875395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:49.488 [2024-11-20 06:15:08.875409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.475 ms 00:16:49.488 [2024-11-20 06:15:08.875420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.488 [2024-11-20 06:15:08.875635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.488 [2024-11-20 06:15:08.875648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:49.488 [2024-11-20 06:15:08.875658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:49.488 [2024-11-20 06:15:08.875666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.488 [2024-11-20 06:15:08.898848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.488 [2024-11-20 06:15:08.898999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:49.488 [2024-11-20 06:15:08.899019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.154 ms 00:16:49.488 [2024-11-20 06:15:08.899027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.488 [2024-11-20 06:15:08.921829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.488 [2024-11-20 06:15:08.921956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:49.488 [2024-11-20 06:15:08.921977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.741 ms 00:16:49.488 [2024-11-20 06:15:08.921984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.488 [2024-11-20 06:15:08.944275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.488 [2024-11-20 06:15:08.944310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:49.488 [2024-11-20 06:15:08.944323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.231 ms 00:16:49.488 [2024-11-20 06:15:08.944332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.488 [2024-11-20 06:15:08.966241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.488 [2024-11-20 06:15:08.966272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.488 [2024-11-20 06:15:08.966285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.805 ms 00:16:49.488 [2024-11-20 06:15:08.966292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.488 [2024-11-20 06:15:08.966349] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.488 [2024-11-20 06:15:08.966364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966428] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:49.488 [2024-11-20 06:15:08.966531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 
[2024-11-20 06:15:08.966677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:49.489 [2024-11-20 06:15:08.966895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.966995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:49.489 [2024-11-20 06:15:08.967263] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.489 [2024-11-20 06:15:08.967274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:16:49.489 [2024-11-20 06:15:08.967282] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.489 [2024-11-20 06:15:08.967290] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.489 [2024-11-20 06:15:08.967296] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.489 [2024-11-20 06:15:08.967308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.489 [2024-11-20 06:15:08.967315] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.489 [2024-11-20 06:15:08.967324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:49.489 [2024-11-20 06:15:08.967331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.490 [2024-11-20 06:15:08.967338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.490 [2024-11-20 06:15:08.967344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.490 [2024-11-20 06:15:08.967352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.490 [2024-11-20 06:15:08.967359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.490 [2024-11-20 06:15:08.967371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:16:49.490 [2024-11-20 06:15:08.967378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.490 [2024-11-20 06:15:08.979705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.490 [2024-11-20 06:15:08.979834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.490 [2024-11-20 06:15:08.979853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.297 ms 00:16:49.490 [2024-11-20 06:15:08.979861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.490 [2024-11-20 06:15:08.980246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.490 [2024-11-20 06:15:08.980257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.490 [2024-11-20 06:15:08.980268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:16:49.490 [2024-11-20 06:15:08.980275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.490 [2024-11-20 06:15:09.023595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.490 [2024-11-20 06:15:09.023641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.490 [2024-11-20 06:15:09.023655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.490 [2024-11-20 06:15:09.023664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.490 [2024-11-20 06:15:09.023771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.490 [2024-11-20 06:15:09.023782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.490 [2024-11-20 06:15:09.023791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.490 [2024-11-20 06:15:09.023799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.490 [2024-11-20 06:15:09.023858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.490 [2024-11-20 06:15:09.023868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.490 [2024-11-20 06:15:09.023881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.490 [2024-11-20 06:15:09.023889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.490 [2024-11-20 06:15:09.023914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.490 [2024-11-20 06:15:09.023922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.490 [2024-11-20 06:15:09.023932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.490 [2024-11-20 06:15:09.023939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.490 [2024-11-20 06:15:09.104469] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.490 [2024-11-20 06:15:09.104521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.490 [2024-11-20 06:15:09.104535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.490 [2024-11-20 06:15:09.104544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.748 [2024-11-20 06:15:09.166024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.748 [2024-11-20 06:15:09.166198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.748 [2024-11-20 06:15:09.166219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.748 [2024-11-20 06:15:09.166227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.748 [2024-11-20 06:15:09.166304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.748 [2024-11-20 06:15:09.166313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.748 [2024-11-20 06:15:09.166338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.748 [2024-11-20 06:15:09.166348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.748 [2024-11-20 06:15:09.166401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.748 [2024-11-20 06:15:09.166410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.748 [2024-11-20 06:15:09.166419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.748 [2024-11-20 06:15:09.166427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.748 [2024-11-20 06:15:09.166562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.748 [2024-11-20 06:15:09.166573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.748 [2024-11-20 06:15:09.166583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.748 [2024-11-20 06:15:09.166592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.748 [2024-11-20 06:15:09.166641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.748 [2024-11-20 06:15:09.166650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:49.748 [2024-11-20 06:15:09.166659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.748 [2024-11-20 06:15:09.166666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.748 [2024-11-20 06:15:09.166711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.748 [2024-11-20 06:15:09.166720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.748 [2024-11-20 06:15:09.166731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.748 [2024-11-20 06:15:09.166738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.748 [2024-11-20 06:15:09.166814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.748 [2024-11-20 06:15:09.166824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.748 [2024-11-20 06:15:09.166834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.748 [2024-11-20 06:15:09.166842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:49.748 [2024-11-20 06:15:09.167010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.939 ms, result 0 00:16:49.748 true 00:16:49.748 06:15:09 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73708 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73708 ']' 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73708 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73708 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:49.748 killing process with pid 73708 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73708' 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73708 00:16:49.748 06:15:09 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73708 00:16:59.788 06:15:18 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:00.352 65536+0 records in 00:17:00.352 65536+0 records out 00:17:00.352 268435456 bytes (268 MB, 256 MiB) copied, 1.09148 s, 246 MB/s 00:17:00.352 06:15:19 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:00.352 [2024-11-20 06:15:19.879339] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
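As a quick cross-check of the dd figures above: 65536 records at bs=4K are 65536 * 4096 = 268435456 bytes, exactly the 256 MiB random pattern the test writes, and 268435456 / 1.09148 s is about 245.9 * 10^6 B/s, which dd rounds to the reported 246 MB/s (decimal megabytes):

    # All numbers are taken verbatim from the dd output above.
    records, block_size, elapsed_s = 65536, 4096, 1.09148
    total = records * block_size
    assert total == 268435456              # dd's byte count
    print(total / 2**20, "MiB")            # 256.0 MiB
    print(round(total / elapsed_s / 1e6))  # 246, matching dd's MB/s figure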
00:17:00.352 [2024-11-20 06:15:19.879461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73890 ] 00:17:00.609 [2024-11-20 06:15:20.034488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.609 [2024-11-20 06:15:20.137408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.867 [2024-11-20 06:15:20.391772] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.867 [2024-11-20 06:15:20.391835] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.126 [2024-11-20 06:15:20.545802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.545854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.126 [2024-11-20 06:15:20.545867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.126 [2024-11-20 06:15:20.545876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.548512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.548550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.126 [2024-11-20 06:15:20.548560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:17:01.126 [2024-11-20 06:15:20.548568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.548635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.126 [2024-11-20 06:15:20.549341] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.126 [2024-11-20 06:15:20.549367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.549375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.126 [2024-11-20 06:15:20.549384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:01.126 [2024-11-20 06:15:20.549392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.550848] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:01.126 [2024-11-20 06:15:20.563011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.563048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:01.126 [2024-11-20 06:15:20.563060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.165 ms 00:17:01.126 [2024-11-20 06:15:20.563069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.563153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.563165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:01.126 [2024-11-20 06:15:20.563174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:01.126 [2024-11-20 06:15:20.563182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.567783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:01.126 [2024-11-20 06:15:20.567811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.126 [2024-11-20 06:15:20.567820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.560 ms 00:17:01.126 [2024-11-20 06:15:20.567828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.567909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.567920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.126 [2024-11-20 06:15:20.567928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:01.126 [2024-11-20 06:15:20.567936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.567963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.567975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.126 [2024-11-20 06:15:20.567983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:01.126 [2024-11-20 06:15:20.567991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.568012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.126 [2024-11-20 06:15:20.571407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.571433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.126 [2024-11-20 06:15:20.571442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:17:01.126 [2024-11-20 06:15:20.571450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.571485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.571505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.126 [2024-11-20 06:15:20.571514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:01.126 [2024-11-20 06:15:20.571521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.571539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:01.126 [2024-11-20 06:15:20.571559] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:01.126 [2024-11-20 06:15:20.571594] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:01.126 [2024-11-20 06:15:20.571609] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:01.126 [2024-11-20 06:15:20.571711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:01.126 [2024-11-20 06:15:20.571728] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.126 [2024-11-20 06:15:20.571739] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:01.126 [2024-11-20 06:15:20.571749] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.126 [2024-11-20 06:15:20.571761] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.126 [2024-11-20 06:15:20.571769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.126 [2024-11-20 06:15:20.571777] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.126 [2024-11-20 06:15:20.571784] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:01.126 [2024-11-20 06:15:20.571792] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:01.126 [2024-11-20 06:15:20.571799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.571807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.126 [2024-11-20 06:15:20.571815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:01.126 [2024-11-20 06:15:20.571821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.571913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.126 [2024-11-20 06:15:20.571930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.126 [2024-11-20 06:15:20.571938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:01.126 [2024-11-20 06:15:20.571945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.126 [2024-11-20 06:15:20.572056] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.126 [2024-11-20 06:15:20.572073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.126 [2024-11-20 06:15:20.572084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.126 [2024-11-20 06:15:20.572092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.126 [2024-11-20 06:15:20.572101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.126 [2024-11-20 06:15:20.572108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.126 [2024-11-20 06:15:20.572115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.126 [2024-11-20 06:15:20.572122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.126 [2024-11-20 06:15:20.572129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.126 [2024-11-20 06:15:20.572136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.126 [2024-11-20 06:15:20.572144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.126 [2024-11-20 06:15:20.572151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.126 [2024-11-20 06:15:20.572160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.126 [2024-11-20 06:15:20.572172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.126 [2024-11-20 06:15:20.572178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:01.126 [2024-11-20 06:15:20.572185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.126 [2024-11-20 06:15:20.572193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.126 [2024-11-20 06:15:20.572200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:01.126 [2024-11-20 06:15:20.572208] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.126 [2024-11-20 06:15:20.572215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.126 [2024-11-20 06:15:20.572222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.126 [2024-11-20 06:15:20.572229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.126 [2024-11-20 06:15:20.572235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.126 [2024-11-20 06:15:20.572243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.126 [2024-11-20 06:15:20.572250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.127 [2024-11-20 06:15:20.572256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.127 [2024-11-20 06:15:20.572263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.127 [2024-11-20 06:15:20.572270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.127 [2024-11-20 06:15:20.572276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.127 [2024-11-20 06:15:20.572283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:01.127 [2024-11-20 06:15:20.572289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.127 [2024-11-20 06:15:20.572296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.127 [2024-11-20 06:15:20.572303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:01.127 [2024-11-20 06:15:20.572310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.127 [2024-11-20 06:15:20.572316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.127 [2024-11-20 06:15:20.572322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:01.127 [2024-11-20 06:15:20.572329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.127 [2024-11-20 06:15:20.572336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:01.127 [2024-11-20 06:15:20.572342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:01.127 [2024-11-20 06:15:20.572348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.127 [2024-11-20 06:15:20.572354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:01.127 [2024-11-20 06:15:20.572361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:01.127 [2024-11-20 06:15:20.572367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.127 [2024-11-20 06:15:20.572374] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.127 [2024-11-20 06:15:20.572383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.127 [2024-11-20 06:15:20.572390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.127 [2024-11-20 06:15:20.572400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.127 [2024-11-20 06:15:20.572408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.127 [2024-11-20 06:15:20.572415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.127 [2024-11-20 06:15:20.572421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.127 
[2024-11-20 06:15:20.572430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.127 [2024-11-20 06:15:20.572437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.127 [2024-11-20 06:15:20.572444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.127 [2024-11-20 06:15:20.572452] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.127 [2024-11-20 06:15:20.572463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.127 [2024-11-20 06:15:20.572472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.127 [2024-11-20 06:15:20.572479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:01.127 [2024-11-20 06:15:20.572486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:01.127 [2024-11-20 06:15:20.572511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:01.127 [2024-11-20 06:15:20.572519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:01.127 [2024-11-20 06:15:20.572526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:01.127 [2024-11-20 06:15:20.572533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:01.127 [2024-11-20 06:15:20.572540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:01.127 [2024-11-20 06:15:20.572547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:01.127 [2024-11-20 06:15:20.572555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:01.127 [2024-11-20 06:15:20.572562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:01.127 [2024-11-20 06:15:20.572569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:01.127 [2024-11-20 06:15:20.572576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:01.127 [2024-11-20 06:15:20.572583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:01.127 [2024-11-20 06:15:20.572592] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.127 [2024-11-20 06:15:20.572600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.127 [2024-11-20 06:15:20.572607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.127 [2024-11-20 06:15:20.572614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.127 [2024-11-20 06:15:20.572622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.127 [2024-11-20 06:15:20.572629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.127 [2024-11-20 06:15:20.572636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.572646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.127 [2024-11-20 06:15:20.572656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:17:01.127 [2024-11-20 06:15:20.572663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.127 [2024-11-20 06:15:20.598306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.598336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.127 [2024-11-20 06:15:20.598346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.593 ms 00:17:01.127 [2024-11-20 06:15:20.598354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.127 [2024-11-20 06:15:20.598470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.598483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:01.127 [2024-11-20 06:15:20.598504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:01.127 [2024-11-20 06:15:20.598511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.127 [2024-11-20 06:15:20.639645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.639688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.127 [2024-11-20 06:15:20.639702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.112 ms 00:17:01.127 [2024-11-20 06:15:20.639712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.127 [2024-11-20 06:15:20.639811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.639823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.127 [2024-11-20 06:15:20.639832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.127 [2024-11-20 06:15:20.639839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.127 [2024-11-20 06:15:20.640149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.640191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.127 [2024-11-20 06:15:20.640200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:01.127 [2024-11-20 06:15:20.640212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.127 [2024-11-20 06:15:20.640334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.640350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.127 [2024-11-20 06:15:20.640358] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:01.127 [2024-11-20 06:15:20.640366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.127 [2024-11-20 06:15:20.653470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.127 [2024-11-20 06:15:20.653512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.128 [2024-11-20 06:15:20.653522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.085 ms 00:17:01.128 [2024-11-20 06:15:20.653530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.128 [2024-11-20 06:15:20.665658] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:01.128 [2024-11-20 06:15:20.665693] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:01.128 [2024-11-20 06:15:20.665706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.128 [2024-11-20 06:15:20.665714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:01.128 [2024-11-20 06:15:20.665723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.076 ms 00:17:01.128 [2024-11-20 06:15:20.665730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.128 [2024-11-20 06:15:20.689905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.128 [2024-11-20 06:15:20.689940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:01.128 [2024-11-20 06:15:20.689959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.104 ms 00:17:01.128 [2024-11-20 06:15:20.689968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.128 [2024-11-20 06:15:20.701553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.128 [2024-11-20 06:15:20.701582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:01.128 [2024-11-20 06:15:20.701592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.509 ms 00:17:01.128 [2024-11-20 06:15:20.701599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.128 [2024-11-20 06:15:20.712636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.128 [2024-11-20 06:15:20.712664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:01.128 [2024-11-20 06:15:20.712674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.977 ms 00:17:01.128 [2024-11-20 06:15:20.712681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.128 [2024-11-20 06:15:20.713294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.128 [2024-11-20 06:15:20.713320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:01.128 [2024-11-20 06:15:20.713329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:17:01.128 [2024-11-20 06:15:20.713337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.387 [2024-11-20 06:15:20.768280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.387 [2024-11-20 06:15:20.768332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:01.387 [2024-11-20 06:15:20.768345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.919 ms 00:17:01.387 [2024-11-20 06:15:20.768353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.387 [2024-11-20 06:15:20.778673] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:01.387 [2024-11-20 06:15:20.792045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.387 [2024-11-20 06:15:20.792080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:01.387 [2024-11-20 06:15:20.792092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.592 ms 00:17:01.387 [2024-11-20 06:15:20.792099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.387 [2024-11-20 06:15:20.792183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.387 [2024-11-20 06:15:20.792196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:01.387 [2024-11-20 06:15:20.792206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:01.387 [2024-11-20 06:15:20.792213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.387 [2024-11-20 06:15:20.792260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.387 [2024-11-20 06:15:20.792270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:01.387 [2024-11-20 06:15:20.792278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:01.387 [2024-11-20 06:15:20.792286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.387 [2024-11-20 06:15:20.792309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.388 [2024-11-20 06:15:20.792318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:01.388 [2024-11-20 06:15:20.792328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:01.388 [2024-11-20 06:15:20.792336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.388 [2024-11-20 06:15:20.792367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:01.388 [2024-11-20 06:15:20.792378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.388 [2024-11-20 06:15:20.792385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:01.388 [2024-11-20 06:15:20.792393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:01.388 [2024-11-20 06:15:20.792401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.388 [2024-11-20 06:15:20.815180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.388 [2024-11-20 06:15:20.815217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:01.388 [2024-11-20 06:15:20.815228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.760 ms 00:17:01.388 [2024-11-20 06:15:20.815236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.388 [2024-11-20 06:15:20.815323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.388 [2024-11-20 06:15:20.815334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:01.388 [2024-11-20 06:15:20.815343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:01.388 [2024-11-20 06:15:20.815350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
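The layout dump above is internally consistent: 23592960 L2P entries at an address size of 4 bytes works out to exactly the 90.00 MiB reported for the l2p region, and the superblock region table agrees once a 4 KiB FTL block is assumed (type 0x2, the L2P, spans blk_sz 0x5a00 = 23040 blocks). A minimal shell check of that arithmetic, under the 4 KiB-block assumption; the variable names are illustrative, not taken from the test scripts:

  L2P_ENTRIES=23592960
  L2P_ADDR_SIZE=4   # bytes per entry, per "L2P address size: 4"
  echo $(( L2P_ENTRIES * L2P_ADDR_SIZE ))            # 94371840 bytes
  echo $(( L2P_ENTRIES * L2P_ADDR_SIZE / 1048576 ))  # 90, matching "Region l2p ... blocks: 90.00 MiB"
  # Cross-check via the region table entry type:0x2 (blk_sz 0x5a00 = 23040 blocks):
  echo $(( 0x5a00 * 4096 / 1048576 ))                # 90 again, so a 4 KiB block is consistent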
00:17:01.388 [2024-11-20 06:15:20.816401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:01.388 [2024-11-20 06:15:20.819475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.326 ms, result 0 00:17:01.388 [2024-11-20 06:15:20.820162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:01.388 [2024-11-20 06:15:20.833106] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.329  [2024-11-20T06:15:22.895Z] Copying: 40/256 [MB] (40 MBps) [2024-11-20T06:15:24.268Z] Copying: 81/256 [MB] (40 MBps) [2024-11-20T06:15:25.201Z] Copying: 124/256 [MB] (42 MBps) [2024-11-20T06:15:26.136Z] Copying: 165/256 [MB] (41 MBps) [2024-11-20T06:15:27.070Z] Copying: 208/256 [MB] (42 MBps) [2024-11-20T06:15:27.329Z] Copying: 243/256 [MB] (34 MBps) [2024-11-20T06:15:27.329Z] Copying: 256/256 [MB] (average 40 MBps)[2024-11-20 06:15:27.153462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.696 [2024-11-20 06:15:27.162732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.696 [2024-11-20 06:15:27.162775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:07.696 [2024-11-20 06:15:27.162788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:07.696 [2024-11-20 06:15:27.162797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.696 [2024-11-20 06:15:27.162824] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:07.696 [2024-11-20 06:15:27.165387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.696 [2024-11-20 06:15:27.165415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:07.696 [2024-11-20 06:15:27.165425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:17:07.696 [2024-11-20 06:15:27.165434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.696 [2024-11-20 06:15:27.166924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.696 [2024-11-20 06:15:27.166954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:07.696 [2024-11-20 06:15:27.166963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:17:07.696 [2024-11-20 06:15:27.166971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.696 [2024-11-20 06:15:27.174211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.696 [2024-11-20 06:15:27.174241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:07.696 [2024-11-20 06:15:27.174255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.223 ms 00:17:07.696 [2024-11-20 06:15:27.174263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.696 [2024-11-20 06:15:27.181179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.696 [2024-11-20 06:15:27.181208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:07.696 [2024-11-20 06:15:27.181218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:17:07.697 [2024-11-20 06:15:27.181227] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.203922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.697 [2024-11-20 06:15:27.203958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:07.697 [2024-11-20 06:15:27.203970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.655 ms 00:17:07.697 [2024-11-20 06:15:27.203978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.217764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.697 [2024-11-20 06:15:27.217798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:07.697 [2024-11-20 06:15:27.217814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.749 ms 00:17:07.697 [2024-11-20 06:15:27.217826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.217959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.697 [2024-11-20 06:15:27.217985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:07.697 [2024-11-20 06:15:27.217994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:07.697 [2024-11-20 06:15:27.218002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.241252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.697 [2024-11-20 06:15:27.241285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:07.697 [2024-11-20 06:15:27.241295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.234 ms 00:17:07.697 [2024-11-20 06:15:27.241303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.263712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.697 [2024-11-20 06:15:27.263744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:07.697 [2024-11-20 06:15:27.263754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.374 ms 00:17:07.697 [2024-11-20 06:15:27.263762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.285538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.697 [2024-11-20 06:15:27.285571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:07.697 [2024-11-20 06:15:27.285582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.739 ms 00:17:07.697 [2024-11-20 06:15:27.285589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.307455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.697 [2024-11-20 06:15:27.307487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:07.697 [2024-11-20 06:15:27.307513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.805 ms 00:17:07.697 [2024-11-20 06:15:27.307522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.697 [2024-11-20 06:15:27.307556] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:07.697 [2024-11-20 06:15:27.307575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307585] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307779] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 
[2024-11-20 06:15:27.307972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.307993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:07.697 [2024-11-20 06:15:27.308072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:17:07.698 [2024-11-20 06:15:27.308161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:07.698 [2024-11-20 06:15:27.308354] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
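The band dump above squares with that layout: 100 bands of 261120 blocks each, at the 4 KiB block size inferred earlier, is 1020 MiB per band and 102000 MiB in total, just under the 102400.00 MiB data_btm region; the remainder is presumably per-band bookkeeping, which this log does not confirm. A quick check under the same assumptions (names are illustrative):

  BANDS=100
  BLOCKS_PER_BAND=261120   # per "Band N: 0 / 261120"
  echo $(( BLOCKS_PER_BAND * 4096 / 1048576 ))          # 1020 MiB per band
  echo $(( BANDS * BLOCKS_PER_BAND * 4096 / 1048576 ))  # 102000 MiB across all bands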
00:17:07.698 [2024-11-20 06:15:27.308362] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:17:07.698 [2024-11-20 06:15:27.308376] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:07.698 [2024-11-20 06:15:27.308384] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:07.698 [2024-11-20 06:15:27.308391] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:07.698 [2024-11-20 06:15:27.308399] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:07.698 [2024-11-20 06:15:27.308406] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:07.698 [2024-11-20 06:15:27.308414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:07.698 [2024-11-20 06:15:27.308421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:07.698 [2024-11-20 06:15:27.308427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:07.698 [2024-11-20 06:15:27.308434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:07.698 [2024-11-20 06:15:27.308442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.698 [2024-11-20 06:15:27.308449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:07.698 [2024-11-20 06:15:27.308460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:17:07.698 [2024-11-20 06:15:27.308467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.698 [2024-11-20 06:15:27.320732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.698 [2024-11-20 06:15:27.320764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:07.698 [2024-11-20 06:15:27.320774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.248 ms 00:17:07.698 [2024-11-20 06:15:27.320782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.698 [2024-11-20 06:15:27.321129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.698 [2024-11-20 06:15:27.321152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:07.698 [2024-11-20 06:15:27.321162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:17:07.698 [2024-11-20 06:15:27.321169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.355771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.355810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.956 [2024-11-20 06:15:27.355820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.355828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.355904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.355917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.956 [2024-11-20 06:15:27.355924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.355932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.355973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 
06:15:27.355982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.956 [2024-11-20 06:15:27.355993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.356000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.356017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.356025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.956 [2024-11-20 06:15:27.356036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.356043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.434069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.434120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.956 [2024-11-20 06:15:27.434131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.434139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.498142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.956 [2024-11-20 06:15:27.498153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.498162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.498226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.956 [2024-11-20 06:15:27.498234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.498242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.498278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.956 [2024-11-20 06:15:27.498286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.498296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.498394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.956 [2024-11-20 06:15:27.498401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.498409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.498446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:07.956 [2024-11-20 06:15:27.498454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.498462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498509] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.498519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.956 [2024-11-20 06:15:27.498527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.498534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.956 [2024-11-20 06:15:27.498586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.956 [2024-11-20 06:15:27.498594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.956 [2024-11-20 06:15:27.498604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.956 [2024-11-20 06:15:27.498733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.993 ms, result 0 00:17:08.890 00:17:08.890 00:17:08.890 06:15:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73987 00:17:08.890 06:15:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73987 00:17:08.890 06:15:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:08.890 06:15:28 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73987 ']' 00:17:08.890 06:15:28 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.890 06:15:28 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:08.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.890 06:15:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.890 06:15:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:08.890 06:15:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:09.148 [2024-11-20 06:15:28.536351] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
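Here trim.sh@71-73 launch a fresh spdk_tgt with the ftl_init log flag, record its pid as svcpid=73987, and block in waitforlisten until the target answers on /var/tmp/spdk.sock, after which the test replays its bdev configuration (seen below). A minimal sketch of that wait pattern, assuming polling via the standard rpc_get_methods RPC; the real helper lives in autotest_common.sh and may differ in detail:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!
  # Poll until the target answers on the default RPC socket; rpc_get_methods
  # succeeds only once the app is up and listening.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done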
00:17:09.148 [2024-11-20 06:15:28.536474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73987 ] 00:17:09.148 [2024-11-20 06:15:28.696026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.406 [2024-11-20 06:15:28.798246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.970 06:15:29 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:09.970 06:15:29 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:09.970 06:15:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:10.229 [2024-11-20 06:15:29.641562] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.229 [2024-11-20 06:15:29.641632] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.229 [2024-11-20 06:15:29.811741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.229 [2024-11-20 06:15:29.811796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.229 [2024-11-20 06:15:29.811813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.229 [2024-11-20 06:15:29.811821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.229 [2024-11-20 06:15:29.814459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.229 [2024-11-20 06:15:29.814506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.229 [2024-11-20 06:15:29.814518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:17:10.229 [2024-11-20 06:15:29.814526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.229 [2024-11-20 06:15:29.814978] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.229 [2024-11-20 06:15:29.815738] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.229 [2024-11-20 06:15:29.815772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.229 [2024-11-20 06:15:29.815782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.229 [2024-11-20 06:15:29.815795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:17:10.229 [2024-11-20 06:15:29.815803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.229 [2024-11-20 06:15:29.817196] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:10.229 [2024-11-20 06:15:29.829764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.229 [2024-11-20 06:15:29.829804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:10.229 [2024-11-20 06:15:29.829817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.573 ms 00:17:10.230 [2024-11-20 06:15:29.829826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.829907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.829920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:10.230 [2024-11-20 06:15:29.829928] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:10.230 [2024-11-20 06:15:29.829937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.835091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.835127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.230 [2024-11-20 06:15:29.835138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.105 ms 00:17:10.230 [2024-11-20 06:15:29.835148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.835254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.835267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.230 [2024-11-20 06:15:29.835275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:10.230 [2024-11-20 06:15:29.835284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.835316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.835327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.230 [2024-11-20 06:15:29.835335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:10.230 [2024-11-20 06:15:29.835344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.835367] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:10.230 [2024-11-20 06:15:29.838717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.838746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.230 [2024-11-20 06:15:29.838776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:17:10.230 [2024-11-20 06:15:29.838784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.838821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.838830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.230 [2024-11-20 06:15:29.838840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:10.230 [2024-11-20 06:15:29.838849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.838871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:10.230 [2024-11-20 06:15:29.838887] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:10.230 [2024-11-20 06:15:29.838926] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:10.230 [2024-11-20 06:15:29.838941] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:10.230 [2024-11-20 06:15:29.839045] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:10.230 [2024-11-20 06:15:29.839056] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.230 [2024-11-20 06:15:29.839072] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:10.230 [2024-11-20 06:15:29.839083] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839094] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839102] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:10.230 [2024-11-20 06:15:29.839111] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:10.230 [2024-11-20 06:15:29.839118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:10.230 [2024-11-20 06:15:29.839129] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:10.230 [2024-11-20 06:15:29.839137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.839146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:10.230 [2024-11-20 06:15:29.839154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:10.230 [2024-11-20 06:15:29.839162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.839261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.230 [2024-11-20 06:15:29.839279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:10.230 [2024-11-20 06:15:29.839286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:10.230 [2024-11-20 06:15:29.839295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.230 [2024-11-20 06:15:29.839400] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:10.230 [2024-11-20 06:15:29.839418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:10.230 [2024-11-20 06:15:29.839426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:10.230 [2024-11-20 06:15:29.839453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:10.230 [2024-11-20 06:15:29.839479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.230 [2024-11-20 06:15:29.839507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:10.230 [2024-11-20 06:15:29.839515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:10.230 [2024-11-20 06:15:29.839522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.230 [2024-11-20 06:15:29.839530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:10.230 [2024-11-20 06:15:29.839536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:10.230 [2024-11-20 06:15:29.839545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.230 
[2024-11-20 06:15:29.839551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:10.230 [2024-11-20 06:15:29.839559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:10.230 [2024-11-20 06:15:29.839595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:10.230 [2024-11-20 06:15:29.839619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:10.230 [2024-11-20 06:15:29.839642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:10.230 [2024-11-20 06:15:29.839665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:10.230 [2024-11-20 06:15:29.839688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.230 [2024-11-20 06:15:29.839703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:10.230 [2024-11-20 06:15:29.839711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:10.230 [2024-11-20 06:15:29.839717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.230 [2024-11-20 06:15:29.839724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:10.230 [2024-11-20 06:15:29.839731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:10.230 [2024-11-20 06:15:29.839740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:10.230 [2024-11-20 06:15:29.839755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:10.230 [2024-11-20 06:15:29.839762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839769] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:10.230 [2024-11-20 06:15:29.839779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:10.230 [2024-11-20 06:15:29.839787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.230 [2024-11-20 06:15:29.839803] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:10.230 [2024-11-20 06:15:29.839810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:10.230 [2024-11-20 06:15:29.839817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:10.230 [2024-11-20 06:15:29.839826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:10.230 [2024-11-20 06:15:29.839834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:10.230 [2024-11-20 06:15:29.839840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:10.230 [2024-11-20 06:15:29.839850] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:10.230 [2024-11-20 06:15:29.839859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.230 [2024-11-20 06:15:29.839871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:10.230 [2024-11-20 06:15:29.839878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:10.231 [2024-11-20 06:15:29.839887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:10.231 [2024-11-20 06:15:29.839893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:10.231 [2024-11-20 06:15:29.839902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:10.231 [2024-11-20 06:15:29.839909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:10.231 [2024-11-20 06:15:29.839917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:10.231 [2024-11-20 06:15:29.839924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:10.231 [2024-11-20 06:15:29.839932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:10.231 [2024-11-20 06:15:29.839940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:10.231 [2024-11-20 06:15:29.839948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:10.231 [2024-11-20 06:15:29.839955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:10.231 [2024-11-20 06:15:29.839964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:10.231 [2024-11-20 06:15:29.839972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:10.231 [2024-11-20 06:15:29.839980] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:10.231 [2024-11-20 
06:15:29.839989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.231 [2024-11-20 06:15:29.840000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:10.231 [2024-11-20 06:15:29.840007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:10.231 [2024-11-20 06:15:29.840015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:10.231 [2024-11-20 06:15:29.840022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:10.231 [2024-11-20 06:15:29.840031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.231 [2024-11-20 06:15:29.840039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:10.231 [2024-11-20 06:15:29.840048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:17:10.231 [2024-11-20 06:15:29.840055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.866507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.866548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.492 [2024-11-20 06:15:29.866561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.384 ms 00:17:10.492 [2024-11-20 06:15:29.866571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.866705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.866721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:10.492 [2024-11-20 06:15:29.866731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:10.492 [2024-11-20 06:15:29.866739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.897149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.897193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.492 [2024-11-20 06:15:29.897206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.377 ms 00:17:10.492 [2024-11-20 06:15:29.897214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.897292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.897302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.492 [2024-11-20 06:15:29.897312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:10.492 [2024-11-20 06:15:29.897320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.897662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.897683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.492 [2024-11-20 06:15:29.897697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:10.492 [2024-11-20 06:15:29.897704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.897832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.897847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.492 [2024-11-20 06:15:29.897857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:10.492 [2024-11-20 06:15:29.897864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.912393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.912429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.492 [2024-11-20 06:15:29.912440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.506 ms 00:17:10.492 [2024-11-20 06:15:29.912449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.925301] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:10.492 [2024-11-20 06:15:29.925336] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:10.492 [2024-11-20 06:15:29.925350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.925358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:10.492 [2024-11-20 06:15:29.925368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.766 ms 00:17:10.492 [2024-11-20 06:15:29.925375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.949988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.950028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:10.492 [2024-11-20 06:15:29.950041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.531 ms 00:17:10.492 [2024-11-20 06:15:29.950051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.962618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.962666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:10.492 [2024-11-20 06:15:29.962679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.498 ms 00:17:10.492 [2024-11-20 06:15:29.962686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.974127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.974166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:10.492 [2024-11-20 06:15:29.974180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.355 ms 00:17:10.492 [2024-11-20 06:15:29.974188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:29.974888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:29.974918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:10.492 [2024-11-20 06:15:29.974929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:17:10.492 [2024-11-20 06:15:29.974937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 
06:15:30.038214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.038279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:10.492 [2024-11-20 06:15:30.038298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.247 ms 00:17:10.492 [2024-11-20 06:15:30.038306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.048761] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:10.492 [2024-11-20 06:15:30.063424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.063470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:10.492 [2024-11-20 06:15:30.063487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.989 ms 00:17:10.492 [2024-11-20 06:15:30.063508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.063599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.063612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:10.492 [2024-11-20 06:15:30.063621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:10.492 [2024-11-20 06:15:30.063630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.063678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.063688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:10.492 [2024-11-20 06:15:30.063697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:10.492 [2024-11-20 06:15:30.063708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.063731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.063742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:10.492 [2024-11-20 06:15:30.063749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:10.492 [2024-11-20 06:15:30.063760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.063790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:10.492 [2024-11-20 06:15:30.063803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.063811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:10.492 [2024-11-20 06:15:30.063823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:10.492 [2024-11-20 06:15:30.063830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.087381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.087419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:10.492 [2024-11-20 06:15:30.087433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.524 ms 00:17:10.492 [2024-11-20 06:15:30.087441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.087545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.492 [2024-11-20 06:15:30.087556] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:10.492 [2024-11-20 06:15:30.087567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:10.492 [2024-11-20 06:15:30.087576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.492 [2024-11-20 06:15:30.088307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.492 [2024-11-20 06:15:30.091310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.299 ms, result 0 00:17:10.492 [2024-11-20 06:15:30.092433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:10.492 Some configs were skipped because the RPC state that can call them passed over. 00:17:10.752 06:15:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:10.752 [2024-11-20 06:15:30.323971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.752 [2024-11-20 06:15:30.324026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:10.752 [2024-11-20 06:15:30.324039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.746 ms 00:17:10.752 [2024-11-20 06:15:30.324049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.753 [2024-11-20 06:15:30.324085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.866 ms, result 0 00:17:10.753 true 00:17:10.753 06:15:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:11.013 [2024-11-20 06:15:30.524232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.013 [2024-11-20 06:15:30.524283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:11.013 [2024-11-20 06:15:30.524298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:17:11.013 [2024-11-20 06:15:30.524305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.013 [2024-11-20 06:15:30.524343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.864 ms, result 0 00:17:11.013 true 00:17:11.013 06:15:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73987 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73987 ']' 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73987 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73987 00:17:11.013 killing process with pid 73987 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73987' 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73987 00:17:11.013 06:15:30 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73987 00:17:11.956 [2024-11-20 06:15:31.257840] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.956 [2024-11-20 06:15:31.257899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:11.956 [2024-11-20 06:15:31.257912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:11.956 [2024-11-20 06:15:31.257921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.956 [2024-11-20 06:15:31.257945] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:11.956 [2024-11-20 06:15:31.260559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.956 [2024-11-20 06:15:31.260589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:11.956 [2024-11-20 06:15:31.260604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:17:11.956 [2024-11-20 06:15:31.260614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.956 [2024-11-20 06:15:31.260917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.956 [2024-11-20 06:15:31.260940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:11.956 [2024-11-20 06:15:31.260953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:11.956 [2024-11-20 06:15:31.260961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.956 [2024-11-20 06:15:31.265413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.265446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:11.957 [2024-11-20 06:15:31.265461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.427 ms 00:17:11.957 [2024-11-20 06:15:31.265470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.272569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.272602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:11.957 [2024-11-20 06:15:31.272615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.048 ms 00:17:11.957 [2024-11-20 06:15:31.272623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.282425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.282457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:11.957 [2024-11-20 06:15:31.282473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.741 ms 00:17:11.957 [2024-11-20 06:15:31.282488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.289156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.289191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:11.957 [2024-11-20 06:15:31.289204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.617 ms 00:17:11.957 [2024-11-20 06:15:31.289212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.289339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.289354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:11.957 [2024-11-20 06:15:31.289365] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:11.957 [2024-11-20 06:15:31.289373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.299552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.299581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:11.957 [2024-11-20 06:15:31.299594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.157 ms 00:17:11.957 [2024-11-20 06:15:31.299602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.309175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.309205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:11.957 [2024-11-20 06:15:31.309219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.533 ms 00:17:11.957 [2024-11-20 06:15:31.309226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.318045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.318073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:11.957 [2024-11-20 06:15:31.318087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.780 ms 00:17:11.957 [2024-11-20 06:15:31.318095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.327061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.957 [2024-11-20 06:15:31.327097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:11.957 [2024-11-20 06:15:31.327111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.825 ms 00:17:11.957 [2024-11-20 06:15:31.327119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.957 [2024-11-20 06:15:31.327156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:11.957 [2024-11-20 06:15:31.327173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327273] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:11.957 [2024-11-20 06:15:31.327460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 
[2024-11-20 06:15:31.327503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:11.958 [2024-11-20 06:15:31.327716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.327995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.328008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.328015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.328025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.328032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:11.958 [2024-11-20 06:15:31.328042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:11.959 [2024-11-20 06:15:31.328058] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:11.959 [2024-11-20 06:15:31.328071] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:17:11.959 [2024-11-20 06:15:31.328085] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:11.959 [2024-11-20 06:15:31.328096] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:11.959 [2024-11-20 06:15:31.328102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:11.959 [2024-11-20 06:15:31.328111] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:11.959 [2024-11-20 06:15:31.328118] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:11.959 [2024-11-20 06:15:31.328126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:11.959 [2024-11-20 06:15:31.328133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:11.959 [2024-11-20 06:15:31.328143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:11.959 [2024-11-20 06:15:31.328149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:11.959 [2024-11-20 06:15:31.328157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
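The "WAF: inf" in the stats dump above is expected for this run rather than an error: the write amplification factor is, in effect, total media writes divided by user writes, and since the trim test issued no user data writes (user writes: 0 against total writes: 960) the quotient is reported as infinity instead of failing on a division by zero. A small sketch of the same guard, using the totals from the dump (the variable names are illustrative, not taken from the test scripts):

    total_writes=960; user_writes=0   # values from the ftl_dev_dump_stats output above
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { print (u > 0 ? t / u : "inf") }'
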
00:17:11.959 [2024-11-20 06:15:31.328165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:11.959 [2024-11-20 06:15:31.328175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:17:11.959 [2024-11-20 06:15:31.328182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.340732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.959 [2024-11-20 06:15:31.340766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:11.959 [2024-11-20 06:15:31.340782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:17:11.959 [2024-11-20 06:15:31.340791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.341157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.959 [2024-11-20 06:15:31.341174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:11.959 [2024-11-20 06:15:31.341185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:17:11.959 [2024-11-20 06:15:31.341194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.384802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.384847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.959 [2024-11-20 06:15:31.384861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.384870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.386107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.386135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.959 [2024-11-20 06:15:31.386146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.386156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.386214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.386224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.959 [2024-11-20 06:15:31.386235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.386242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.386260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.386268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.959 [2024-11-20 06:15:31.386277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.386285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.464050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.464096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:11.959 [2024-11-20 06:15:31.464110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.464117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 
06:15:31.526508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.526556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:11.959 [2024-11-20 06:15:31.526569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.526579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.526670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.526681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:11.959 [2024-11-20 06:15:31.526694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.526701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.526731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.526739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:11.959 [2024-11-20 06:15:31.526748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.526765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.526855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.526866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:11.959 [2024-11-20 06:15:31.526875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.526882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.526915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.526924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:11.959 [2024-11-20 06:15:31.526933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.526940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.526977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.526986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:11.959 [2024-11-20 06:15:31.526997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.527004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.527045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.959 [2024-11-20 06:15:31.527055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:11.959 [2024-11-20 06:15:31.527065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.959 [2024-11-20 06:15:31.527072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.959 [2024-11-20 06:15:31.527204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.340 ms, result 0 00:17:12.896 06:15:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:12.896 06:15:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.896 [2024-11-20 06:15:32.245612] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:17:12.896 [2024-11-20 06:15:32.246033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74034 ] 00:17:12.896 [2024-11-20 06:15:32.405811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.896 [2024-11-20 06:15:32.507126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.154 [2024-11-20 06:15:32.758925] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.154 [2024-11-20 06:15:32.758990] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.414 [2024-11-20 06:15:32.916897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.414 [2024-11-20 06:15:32.916949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:13.414 [2024-11-20 06:15:32.916962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:13.414 [2024-11-20 06:15:32.916970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.414 [2024-11-20 06:15:32.919615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.414 [2024-11-20 06:15:32.919652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.414 [2024-11-20 06:15:32.919662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.627 ms 00:17:13.414 [2024-11-20 06:15:32.919670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.414 [2024-11-20 06:15:32.919741] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:13.414 [2024-11-20 06:15:32.920390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:13.415 [2024-11-20 06:15:32.920416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.920424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.415 [2024-11-20 06:15:32.920434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:17:13.415 [2024-11-20 06:15:32.920441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.922095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:13.415 [2024-11-20 06:15:32.934527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.934567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:13.415 [2024-11-20 06:15:32.934580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:17:13.415 [2024-11-20 06:15:32.934588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.934675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.934687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:13.415 [2024-11-20 06:15:32.934696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.019 ms 00:17:13.415 [2024-11-20 06:15:32.934703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.939554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.939583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.415 [2024-11-20 06:15:32.939592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.811 ms 00:17:13.415 [2024-11-20 06:15:32.939599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.939686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.939696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.415 [2024-11-20 06:15:32.939705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:13.415 [2024-11-20 06:15:32.939712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.939737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.939747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:13.415 [2024-11-20 06:15:32.939756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:13.415 [2024-11-20 06:15:32.939764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.939785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:13.415 [2024-11-20 06:15:32.943029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.943058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.415 [2024-11-20 06:15:32.943067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:17:13.415 [2024-11-20 06:15:32.943074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.943108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.943117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:13.415 [2024-11-20 06:15:32.943125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:13.415 [2024-11-20 06:15:32.943132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.943149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:13.415 [2024-11-20 06:15:32.943169] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:13.415 [2024-11-20 06:15:32.943203] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:13.415 [2024-11-20 06:15:32.943219] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:13.415 [2024-11-20 06:15:32.943320] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:13.415 [2024-11-20 06:15:32.943337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:13.415 [2024-11-20 06:15:32.943347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:13.415 [2024-11-20 06:15:32.943357] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943371] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:13.415 [2024-11-20 06:15:32.943386] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:13.415 [2024-11-20 06:15:32.943393] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:13.415 [2024-11-20 06:15:32.943400] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:13.415 [2024-11-20 06:15:32.943408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.943415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:13.415 [2024-11-20 06:15:32.943422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:13.415 [2024-11-20 06:15:32.943429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.943530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.415 [2024-11-20 06:15:32.943542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:13.415 [2024-11-20 06:15:32.943550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:13.415 [2024-11-20 06:15:32.943558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.415 [2024-11-20 06:15:32.943655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:13.415 [2024-11-20 06:15:32.943671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:13.415 [2024-11-20 06:15:32.943680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:13.415 [2024-11-20 06:15:32.943703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:13.415 [2024-11-20 06:15:32.943724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.415 [2024-11-20 06:15:32.943737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:13.415 [2024-11-20 06:15:32.943744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:13.415 [2024-11-20 06:15:32.943751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.415 [2024-11-20 06:15:32.943763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:13.415 [2024-11-20 06:15:32.943769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:13.415 [2024-11-20 06:15:32.943777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943783] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:13.415 [2024-11-20 06:15:32.943789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:13.415 [2024-11-20 06:15:32.943809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:13.415 [2024-11-20 06:15:32.943828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:13.415 [2024-11-20 06:15:32.943847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.415 [2024-11-20 06:15:32.943860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:13.415 [2024-11-20 06:15:32.943866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:13.415 [2024-11-20 06:15:32.943872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.416 [2024-11-20 06:15:32.943878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:13.416 [2024-11-20 06:15:32.943884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:13.416 [2024-11-20 06:15:32.943890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.416 [2024-11-20 06:15:32.943896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:13.416 [2024-11-20 06:15:32.943903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:13.416 [2024-11-20 06:15:32.943909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.416 [2024-11-20 06:15:32.943915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:13.416 [2024-11-20 06:15:32.943921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:13.416 [2024-11-20 06:15:32.943928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.416 [2024-11-20 06:15:32.943935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:13.416 [2024-11-20 06:15:32.943941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:13.416 [2024-11-20 06:15:32.943948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.416 [2024-11-20 06:15:32.943954] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:13.416 [2024-11-20 06:15:32.943962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:13.416 [2024-11-20 06:15:32.943969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.416 [2024-11-20 06:15:32.943978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.416 [2024-11-20 06:15:32.943985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:13.416 
[2024-11-20 06:15:32.943992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:13.416 [2024-11-20 06:15:32.943999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:13.416 [2024-11-20 06:15:32.944005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:13.416 [2024-11-20 06:15:32.944011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:13.416 [2024-11-20 06:15:32.944017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:13.416 [2024-11-20 06:15:32.944025] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:13.416 [2024-11-20 06:15:32.944034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.416 [2024-11-20 06:15:32.944042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:13.416 [2024-11-20 06:15:32.944049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:13.416 [2024-11-20 06:15:32.944055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:13.416 [2024-11-20 06:15:32.944062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:13.416 [2024-11-20 06:15:32.944070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:13.416 [2024-11-20 06:15:32.944076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:13.416 [2024-11-20 06:15:32.944083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:13.416 [2024-11-20 06:15:32.944090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:13.416 [2024-11-20 06:15:32.944097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:13.416 [2024-11-20 06:15:32.944104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:13.416 [2024-11-20 06:15:32.944111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:13.416 [2024-11-20 06:15:32.944118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:13.416 [2024-11-20 06:15:32.944126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:13.416 [2024-11-20 06:15:32.944133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:13.416 [2024-11-20 06:15:32.944140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:13.416 [2024-11-20 06:15:32.944148] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.416 [2024-11-20 06:15:32.944155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:13.416 [2024-11-20 06:15:32.944162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:13.416 [2024-11-20 06:15:32.944170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:13.416 [2024-11-20 06:15:32.944178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:13.416 [2024-11-20 06:15:32.944185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:32.944192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:13.416 [2024-11-20 06:15:32.944202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:17:13.416 [2024-11-20 06:15:32.944209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 06:15:32.969928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:32.969962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:13.416 [2024-11-20 06:15:32.969972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.656 ms 00:17:13.416 [2024-11-20 06:15:32.969979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 06:15:32.970097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:32.970110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:13.416 [2024-11-20 06:15:32.970119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:13.416 [2024-11-20 06:15:32.970126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 06:15:33.014912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:33.014958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:13.416 [2024-11-20 06:15:33.014970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.765 ms 00:17:13.416 [2024-11-20 06:15:33.014981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 06:15:33.015091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:33.015103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:13.416 [2024-11-20 06:15:33.015112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:13.416 [2024-11-20 06:15:33.015120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 06:15:33.015447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:33.015472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:13.416 [2024-11-20 06:15:33.015481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:13.416 [2024-11-20 06:15:33.015505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 
06:15:33.015632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:33.015647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:13.416 [2024-11-20 06:15:33.015656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:13.416 [2024-11-20 06:15:33.015663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 06:15:33.028975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:33.029007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:13.416 [2024-11-20 06:15:33.029017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.291 ms 00:17:13.416 [2024-11-20 06:15:33.029024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.416 [2024-11-20 06:15:33.041981] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:13.416 [2024-11-20 06:15:33.042017] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:13.416 [2024-11-20 06:15:33.042029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.416 [2024-11-20 06:15:33.042038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:13.416 [2024-11-20 06:15:33.042046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.903 ms 00:17:13.416 [2024-11-20 06:15:33.042054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.066716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.066767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:13.678 [2024-11-20 06:15:33.066779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.588 ms 00:17:13.678 [2024-11-20 06:15:33.066789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.078567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.078598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:13.678 [2024-11-20 06:15:33.078608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.704 ms 00:17:13.678 [2024-11-20 06:15:33.078615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.089597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.089630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:13.678 [2024-11-20 06:15:33.089640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.917 ms 00:17:13.678 [2024-11-20 06:15:33.089648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.090261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.090288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:13.678 [2024-11-20 06:15:33.090298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:17:13.678 [2024-11-20 06:15:33.090305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.146322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.146375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:13.678 [2024-11-20 06:15:33.146388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.992 ms 00:17:13.678 [2024-11-20 06:15:33.146396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.156884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:13.678 [2024-11-20 06:15:33.170840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.170874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:13.678 [2024-11-20 06:15:33.170887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.309 ms 00:17:13.678 [2024-11-20 06:15:33.170901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.170989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.170999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:13.678 [2024-11-20 06:15:33.171008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:13.678 [2024-11-20 06:15:33.171015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.171063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.171073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:13.678 [2024-11-20 06:15:33.171081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:13.678 [2024-11-20 06:15:33.171089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.171114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.171123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:13.678 [2024-11-20 06:15:33.171131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:13.678 [2024-11-20 06:15:33.171138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.171170] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:13.678 [2024-11-20 06:15:33.171180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.171187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:13.678 [2024-11-20 06:15:33.171196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:13.678 [2024-11-20 06:15:33.171203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.195285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.195322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:13.678 [2024-11-20 06:15:33.195334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.063 ms 00:17:13.678 [2024-11-20 06:15:33.195343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.195436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.678 [2024-11-20 06:15:33.195447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:13.678 [2024-11-20 06:15:33.195456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:13.678 [2024-11-20 06:15:33.195464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.678 [2024-11-20 06:15:33.196347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.678 [2024-11-20 06:15:33.199295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.177 ms, result 0 00:17:13.678 [2024-11-20 06:15:33.199966] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:13.678 [2024-11-20 06:15:33.212767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.619  [2024-11-20T06:15:35.695Z] Copying: 34/256 [MB] (34 MBps) [2024-11-20T06:15:36.260Z] Copying: 60/256 [MB] (25 MBps) [2024-11-20T06:15:37.634Z] Copying: 95/256 [MB] (34 MBps) [2024-11-20T06:15:38.570Z] Copying: 137/256 [MB] (41 MBps) [2024-11-20T06:15:39.505Z] Copying: 176/256 [MB] (38 MBps) [2024-11-20T06:15:40.159Z] Copying: 216/256 [MB] (40 MBps) [2024-11-20T06:15:40.159Z] Copying: 256/256 [MB] (average 37 MBps)[2024-11-20 06:15:40.111368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.526 [2024-11-20 06:15:40.120708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.526 [2024-11-20 06:15:40.120751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.526 [2024-11-20 06:15:40.120764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.526 [2024-11-20 06:15:40.120778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.526 [2024-11-20 06:15:40.120811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.526 [2024-11-20 06:15:40.123430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.526 [2024-11-20 06:15:40.123459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.526 [2024-11-20 06:15:40.123469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:17:20.526 [2024-11-20 06:15:40.123477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.526 [2024-11-20 06:15:40.123743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.526 [2024-11-20 06:15:40.123763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.526 [2024-11-20 06:15:40.123772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:17:20.526 [2024-11-20 06:15:40.123780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.526 [2024-11-20 06:15:40.127466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.526 [2024-11-20 06:15:40.127500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.527 [2024-11-20 06:15:40.127510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:17:20.527 [2024-11-20 06:15:40.127519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.527 [2024-11-20 06:15:40.134415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.527 [2024-11-20 06:15:40.134440] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.527 [2024-11-20 06:15:40.134450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:17:20.527 [2024-11-20 06:15:40.134459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.159299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.787 [2024-11-20 06:15:40.159345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.787 [2024-11-20 06:15:40.159357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.774 ms 00:17:20.787 [2024-11-20 06:15:40.159366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.174145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.787 [2024-11-20 06:15:40.174193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.787 [2024-11-20 06:15:40.174210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.730 ms 00:17:20.787 [2024-11-20 06:15:40.174219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.174363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.787 [2024-11-20 06:15:40.174374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.787 [2024-11-20 06:15:40.174383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:20.787 [2024-11-20 06:15:40.174390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.198574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.787 [2024-11-20 06:15:40.198623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.787 [2024-11-20 06:15:40.198636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.155 ms 00:17:20.787 [2024-11-20 06:15:40.198644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.223732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.787 [2024-11-20 06:15:40.223779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.787 [2024-11-20 06:15:40.223792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.056 ms 00:17:20.787 [2024-11-20 06:15:40.223800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.253939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.787 [2024-11-20 06:15:40.253982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.787 [2024-11-20 06:15:40.253996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.086 ms 00:17:20.787 [2024-11-20 06:15:40.254004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.276354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.787 [2024-11-20 06:15:40.276393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.787 [2024-11-20 06:15:40.276406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.272 ms 00:17:20.787 [2024-11-20 06:15:40.276414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.787 [2024-11-20 06:15:40.276440] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:17:20.787 [2024-11-20 06:15:40.276455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.787 [2024-11-20 06:15:40.276833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276852] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.276993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277036] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 
06:15:40.277227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.788 [2024-11-20 06:15:40.277243] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.788 [2024-11-20 06:15:40.277251] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:17:20.788 [2024-11-20 06:15:40.277259] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.788 [2024-11-20 06:15:40.277266] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.788 [2024-11-20 06:15:40.277273] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.788 [2024-11-20 06:15:40.277280] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.788 [2024-11-20 06:15:40.277287] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.788 [2024-11-20 06:15:40.277296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.788 [2024-11-20 06:15:40.277303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.788 [2024-11-20 06:15:40.277309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.788 [2024-11-20 06:15:40.277315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.788 [2024-11-20 06:15:40.277322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.788 [2024-11-20 06:15:40.277331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.788 [2024-11-20 06:15:40.277339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:17:20.788 [2024-11-20 06:15:40.277346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.788 [2024-11-20 06:15:40.290048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.788 [2024-11-20 06:15:40.290083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.788 [2024-11-20 06:15:40.290094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:17:20.788 [2024-11-20 06:15:40.290103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.788 [2024-11-20 06:15:40.290472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.788 [2024-11-20 06:15:40.290482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.788 [2024-11-20 06:15:40.290501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:17:20.788 [2024-11-20 06:15:40.290508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.788 [2024-11-20 06:15:40.324988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.788 [2024-11-20 06:15:40.325030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.788 [2024-11-20 06:15:40.325042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.788 [2024-11-20 06:15:40.325050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.788 [2024-11-20 06:15:40.325141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.788 [2024-11-20 06:15:40.325149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.788 [2024-11-20 06:15:40.325157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.788 
[2024-11-20 06:15:40.325164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.788 [2024-11-20 06:15:40.325210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.788 [2024-11-20 06:15:40.325219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.788 [2024-11-20 06:15:40.325226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.788 [2024-11-20 06:15:40.325233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.788 [2024-11-20 06:15:40.325250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.788 [2024-11-20 06:15:40.325260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.788 [2024-11-20 06:15:40.325268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.788 [2024-11-20 06:15:40.325274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.788 [2024-11-20 06:15:40.401489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.788 [2024-11-20 06:15:40.401544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.788 [2024-11-20 06:15:40.401556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.788 [2024-11-20 06:15:40.401564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.465525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.048 [2024-11-20 06:15:40.465578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.048 [2024-11-20 06:15:40.465590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.048 [2024-11-20 06:15:40.465597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.465657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.048 [2024-11-20 06:15:40.465666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.048 [2024-11-20 06:15:40.465675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.048 [2024-11-20 06:15:40.465682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.465710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.048 [2024-11-20 06:15:40.465718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.048 [2024-11-20 06:15:40.465731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.048 [2024-11-20 06:15:40.465738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.465821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.048 [2024-11-20 06:15:40.465830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.048 [2024-11-20 06:15:40.465839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.048 [2024-11-20 06:15:40.465846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.465875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.048 [2024-11-20 06:15:40.465884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:21.048 [2024-11-20 06:15:40.465891] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.048 [2024-11-20 06:15:40.465900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.465936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.048 [2024-11-20 06:15:40.465944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.048 [2024-11-20 06:15:40.465952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.048 [2024-11-20 06:15:40.465959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.466000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.048 [2024-11-20 06:15:40.466009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.048 [2024-11-20 06:15:40.466019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.048 [2024-11-20 06:15:40.466026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.048 [2024-11-20 06:15:40.466154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.427 ms, result 0 00:17:21.613 00:17:21.613 00:17:21.613 06:15:41 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:21.613 06:15:41 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:22.178 06:15:41 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.435 [2024-11-20 06:15:41.845175] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:17:22.435 [2024-11-20 06:15:41.845296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74144 ] 00:17:22.435 [2024-11-20 06:15:42.006252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.692 [2024-11-20 06:15:42.103911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.949 [2024-11-20 06:15:42.354764] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.950 [2024-11-20 06:15:42.354828] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.209 [2024-11-20 06:15:42.630035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.630085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.209 [2024-11-20 06:15:42.630098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.209 [2024-11-20 06:15:42.630106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.632719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.632753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.209 [2024-11-20 06:15:42.632763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:17:23.209 [2024-11-20 06:15:42.632771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.632839] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.209 [2024-11-20 06:15:42.633483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.209 [2024-11-20 06:15:42.633519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.633527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.209 [2024-11-20 06:15:42.633536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:17:23.209 [2024-11-20 06:15:42.633544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.634709] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:23.209 [2024-11-20 06:15:42.646958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.646989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:23.209 [2024-11-20 06:15:42.647000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.251 ms 00:17:23.209 [2024-11-20 06:15:42.647008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.647084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.647095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:23.209 [2024-11-20 06:15:42.647104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:23.209 [2024-11-20 06:15:42.647112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.651850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:23.209 [2024-11-20 06:15:42.651880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.209 [2024-11-20 06:15:42.651889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:17:23.209 [2024-11-20 06:15:42.651896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.651985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.651995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.209 [2024-11-20 06:15:42.652003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:23.209 [2024-11-20 06:15:42.652010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.652036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.652045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.209 [2024-11-20 06:15:42.652052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:23.209 [2024-11-20 06:15:42.652059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.652078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:23.209 [2024-11-20 06:15:42.655259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.655286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.209 [2024-11-20 06:15:42.655295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.185 ms 00:17:23.209 [2024-11-20 06:15:42.655302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.655335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.209 [2024-11-20 06:15:42.655344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.209 [2024-11-20 06:15:42.655352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:23.209 [2024-11-20 06:15:42.655359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.209 [2024-11-20 06:15:42.655379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:23.209 [2024-11-20 06:15:42.655395] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:23.209 [2024-11-20 06:15:42.655427] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:23.209 [2024-11-20 06:15:42.655442] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:23.209 [2024-11-20 06:15:42.655555] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:23.209 [2024-11-20 06:15:42.655567] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.209 [2024-11-20 06:15:42.655577] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:23.209 [2024-11-20 06:15:42.655590] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.210 [2024-11-20 06:15:42.655599] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.210 [2024-11-20 06:15:42.655607] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:23.210 [2024-11-20 06:15:42.655614] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.210 [2024-11-20 06:15:42.655621] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:23.210 [2024-11-20 06:15:42.655629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:23.210 [2024-11-20 06:15:42.655636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.210 [2024-11-20 06:15:42.655644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.210 [2024-11-20 06:15:42.655651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:23.210 [2024-11-20 06:15:42.655658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.210 [2024-11-20 06:15:42.655746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.210 [2024-11-20 06:15:42.655757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.210 [2024-11-20 06:15:42.655765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:23.210 [2024-11-20 06:15:42.655772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.210 [2024-11-20 06:15:42.655869] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.210 [2024-11-20 06:15:42.655887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.210 [2024-11-20 06:15:42.655895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.210 [2024-11-20 06:15:42.655903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.210 [2024-11-20 06:15:42.655911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:23.210 [2024-11-20 06:15:42.655917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.210 [2024-11-20 06:15:42.655924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:23.210 [2024-11-20 06:15:42.655930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.210 [2024-11-20 06:15:42.655937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:23.210 [2024-11-20 06:15:42.655943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.210 [2024-11-20 06:15:42.655950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.210 [2024-11-20 06:15:42.655957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:23.210 [2024-11-20 06:15:42.655963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.210 [2024-11-20 06:15:42.655976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.210 [2024-11-20 06:15:42.655982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:23.210 [2024-11-20 06:15:42.655990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.210 [2024-11-20 06:15:42.655996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.210 [2024-11-20 06:15:42.656004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:23.210 [2024-11-20 06:15:42.656010] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.210 [2024-11-20 06:15:42.656023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.210 [2024-11-20 06:15:42.656036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.210 [2024-11-20 06:15:42.656041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.210 [2024-11-20 06:15:42.656054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.210 [2024-11-20 06:15:42.656060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.210 [2024-11-20 06:15:42.656072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.210 [2024-11-20 06:15:42.656079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.210 [2024-11-20 06:15:42.656092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.210 [2024-11-20 06:15:42.656098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.210 [2024-11-20 06:15:42.656111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.210 [2024-11-20 06:15:42.656117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:23.210 [2024-11-20 06:15:42.656123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.210 [2024-11-20 06:15:42.656130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:23.210 [2024-11-20 06:15:42.656136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:23.210 [2024-11-20 06:15:42.656142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:23.210 [2024-11-20 06:15:42.656155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:23.210 [2024-11-20 06:15:42.656161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656167] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.210 [2024-11-20 06:15:42.656176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.210 [2024-11-20 06:15:42.656185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.210 [2024-11-20 06:15:42.656192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.210 [2024-11-20 06:15:42.656200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.210 [2024-11-20 06:15:42.656207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.210 [2024-11-20 06:15:42.656213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.210 
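The dump_region records above and below print each region as a name/offset/blocks triple spread over three NOTICE lines. To flatten a captured console log into one row per region — a minimal sketch, assuming the log is saved as build.log with one record per line as in the raw console output — something like the following awk works:

    awk '/dump_region:.*Region /  {name = $NF}
         /dump_region:.*offset:/  {off  = $(NF-1)}
         /dump_region:.*blocks:/  {printf "%-16s %12s MiB  %10s MiB\n", name, off, $(NF-1)}' build.log

Each printed row then mirrors the layout table FTL is dumping here: region name, start offset, and size, both in MiB.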
[2024-11-20 06:15:42.656220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.210 [2024-11-20 06:15:42.656226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.210 [2024-11-20 06:15:42.656232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.210 [2024-11-20 06:15:42.656240] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.210 [2024-11-20 06:15:42.656249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.210 [2024-11-20 06:15:42.656257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:23.210 [2024-11-20 06:15:42.656264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:23.210 [2024-11-20 06:15:42.656270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:23.210 [2024-11-20 06:15:42.656277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:23.210 [2024-11-20 06:15:42.656284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:23.210 [2024-11-20 06:15:42.656291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:23.210 [2024-11-20 06:15:42.656299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:23.210 [2024-11-20 06:15:42.656306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:23.210 [2024-11-20 06:15:42.656312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:23.210 [2024-11-20 06:15:42.656319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:23.210 [2024-11-20 06:15:42.656326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:23.210 [2024-11-20 06:15:42.656333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:23.210 [2024-11-20 06:15:42.656339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:23.210 [2024-11-20 06:15:42.656346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:23.210 [2024-11-20 06:15:42.656353] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.210 [2024-11-20 06:15:42.656361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.210 [2024-11-20 06:15:42.656369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.210 [2024-11-20 06:15:42.656375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.210 [2024-11-20 06:15:42.656382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.210 [2024-11-20 06:15:42.656389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.210 [2024-11-20 06:15:42.656397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.210 [2024-11-20 06:15:42.656406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.210 [2024-11-20 06:15:42.656413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:17:23.210 [2024-11-20 06:15:42.656420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.210 [2024-11-20 06:15:42.681518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.210 [2024-11-20 06:15:42.681552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.210 [2024-11-20 06:15:42.681562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.034 ms 00:17:23.210 [2024-11-20 06:15:42.681569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.210 [2024-11-20 06:15:42.681687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.210 [2024-11-20 06:15:42.681697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.210 [2024-11-20 06:15:42.681705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:23.211 [2024-11-20 06:15:42.681712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.723235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.723277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.211 [2024-11-20 06:15:42.723292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.503 ms 00:17:23.211 [2024-11-20 06:15:42.723300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.723397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.723409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.211 [2024-11-20 06:15:42.723418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:23.211 [2024-11-20 06:15:42.723425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.723739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.723766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.211 [2024-11-20 06:15:42.723776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:17:23.211 [2024-11-20 06:15:42.723787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.723918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.723937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.211 [2024-11-20 06:15:42.723946] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:23.211 [2024-11-20 06:15:42.723953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.736968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.736998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.211 [2024-11-20 06:15:42.737008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.994 ms 00:17:23.211 [2024-11-20 06:15:42.737015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.749246] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:23.211 [2024-11-20 06:15:42.749281] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:23.211 [2024-11-20 06:15:42.749293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.749301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:23.211 [2024-11-20 06:15:42.749310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.175 ms 00:17:23.211 [2024-11-20 06:15:42.749317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.773383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.773425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:23.211 [2024-11-20 06:15:42.773437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.993 ms 00:17:23.211 [2024-11-20 06:15:42.773446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.784855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.784884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:23.211 [2024-11-20 06:15:42.784894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.335 ms 00:17:23.211 [2024-11-20 06:15:42.784901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.796142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.796171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:23.211 [2024-11-20 06:15:42.796181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.178 ms 00:17:23.211 [2024-11-20 06:15:42.796189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.211 [2024-11-20 06:15:42.796820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.211 [2024-11-20 06:15:42.796844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.211 [2024-11-20 06:15:42.796853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:23.211 [2024-11-20 06:15:42.796861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.851505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.851556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.470 [2024-11-20 06:15:42.851568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.609 ms 00:17:23.470 [2024-11-20 06:15:42.851576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.862205] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.470 [2024-11-20 06:15:42.876235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.876272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.470 [2024-11-20 06:15:42.876285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.545 ms 00:17:23.470 [2024-11-20 06:15:42.876296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.876380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.876391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.470 [2024-11-20 06:15:42.876400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:23.470 [2024-11-20 06:15:42.876407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.876454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.876463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.470 [2024-11-20 06:15:42.876471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:23.470 [2024-11-20 06:15:42.876478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.876521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.876531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.470 [2024-11-20 06:15:42.876538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:23.470 [2024-11-20 06:15:42.876546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.876577] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.470 [2024-11-20 06:15:42.876587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.876594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.470 [2024-11-20 06:15:42.876602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.470 [2024-11-20 06:15:42.876608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.900381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.900427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.470 [2024-11-20 06:15:42.900439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.751 ms 00:17:23.470 [2024-11-20 06:15:42.900448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:42.900568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:42.900581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.470 [2024-11-20 06:15:42.900590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:23.470 [2024-11-20 06:15:42.900597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
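Each trace_step Action above reports its own per-step duration, while the 'FTL startup' finish record that follows reports the end-to-end figure (271.490 ms in this run). Summing the per-step durations is a quick sanity check that no single step dominates — a sketch, again assuming one record per line and a build.log pre-sliced to this startup window (the same trace_step records also appear for shutdown and any later startups):

    awk '/trace_step:.*duration:/ {sum += $(NF-1)}
         END                     {printf "sum of steps: %.3f ms\n", sum}' build.log

The per-step sum comes in somewhat below the reported total, since the total also covers the management-process glue between steps.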
00:17:23.470 [2024-11-20 06:15:42.901827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.470 [2024-11-20 06:15:42.904894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.490 ms, result 0 00:17:23.470 [2024-11-20 06:15:42.905437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.470 [2024-11-20 06:15:42.918382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.470  [2024-11-20T06:15:43.103Z] Copying: 4096/4096 [kB] (average 42 MBps)[2024-11-20 06:15:43.016233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.470 [2024-11-20 06:15:43.025257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.025295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.470 [2024-11-20 06:15:43.025307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:23.470 [2024-11-20 06:15:43.025319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.025341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.470 [2024-11-20 06:15:43.027901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.027929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.470 [2024-11-20 06:15:43.027940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:17:23.470 [2024-11-20 06:15:43.027949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.029452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.029483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.470 [2024-11-20 06:15:43.029504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:17:23.470 [2024-11-20 06:15:43.029512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.033482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.033517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.470 [2024-11-20 06:15:43.033526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:17:23.470 [2024-11-20 06:15:43.033534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.040594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.040621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.470 [2024-11-20 06:15:43.040631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.035 ms 00:17:23.470 [2024-11-20 06:15:43.040640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.063355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.063387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.470 [2024-11-20 06:15:43.063398] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.662 ms 00:17:23.470 [2024-11-20 06:15:43.063406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.077092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.077129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.470 [2024-11-20 06:15:43.077142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.650 ms 00:17:23.470 [2024-11-20 06:15:43.077150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.077269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.077278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.470 [2024-11-20 06:15:43.077287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:23.470 [2024-11-20 06:15:43.077294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.470 [2024-11-20 06:15:43.099940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.470 [2024-11-20 06:15:43.099972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.470 [2024-11-20 06:15:43.099982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.623 ms 00:17:23.470 [2024-11-20 06:15:43.099989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.730 [2024-11-20 06:15:43.246829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.730 [2024-11-20 06:15:43.246884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.730 [2024-11-20 06:15:43.246897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 146.805 ms 00:17:23.730 [2024-11-20 06:15:43.246906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.730 [2024-11-20 06:15:43.269287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.730 [2024-11-20 06:15:43.269345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.730 [2024-11-20 06:15:43.269364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.336 ms 00:17:23.730 [2024-11-20 06:15:43.269377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.730 [2024-11-20 06:15:43.291862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.730 [2024-11-20 06:15:43.291905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.730 [2024-11-20 06:15:43.291918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.395 ms 00:17:23.730 [2024-11-20 06:15:43.291925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.730 [2024-11-20 06:15:43.291961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.730 [2024-11-20 06:15:43.291976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.291986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.291994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: 
free 00:17:23.730 [2024-11-20 06:15:43.292010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.730 [2024-11-20 06:15:43.292232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292585] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.731 [2024-11-20 06:15:43.292768] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.731 [2024-11-20 06:15:43.292775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:17:23.731 [2024-11-20 06:15:43.292783] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.731 [2024-11-20 06:15:43.292791] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:23.731 [2024-11-20 06:15:43.292798] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.731 [2024-11-20 06:15:43.292805] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.731 [2024-11-20 06:15:43.292812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.731 [2024-11-20 06:15:43.292820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.731 [2024-11-20 06:15:43.292827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.731 [2024-11-20 06:15:43.292833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.731 [2024-11-20 06:15:43.292839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.731 [2024-11-20 06:15:43.292846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.731 [2024-11-20 06:15:43.292855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.731 [2024-11-20 06:15:43.292865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:17:23.731 [2024-11-20 06:15:43.292872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.731 [2024-11-20 06:15:43.305062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.731 [2024-11-20 06:15:43.305095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.731 [2024-11-20 06:15:43.305105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.172 ms 00:17:23.731 [2024-11-20 06:15:43.305112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.731 [2024-11-20 06:15:43.305467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.731 [2024-11-20 06:15:43.305487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.731 [2024-11-20 06:15:43.305512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:23.731 [2024-11-20 06:15:43.305521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.731 [2024-11-20 06:15:43.339974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.731 [2024-11-20 06:15:43.340009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.732 [2024-11-20 06:15:43.340019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.732 [2024-11-20 06:15:43.340027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.732 [2024-11-20 06:15:43.340101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.732 [2024-11-20 06:15:43.340109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.732 [2024-11-20 06:15:43.340116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.732 [2024-11-20 06:15:43.340123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.732 [2024-11-20 06:15:43.340164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.732 [2024-11-20 06:15:43.340173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.732 [2024-11-20 06:15:43.340181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.732 [2024-11-20 06:15:43.340188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.732 [2024-11-20 06:15:43.340204] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.732 [2024-11-20 06:15:43.340214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.732 [2024-11-20 06:15:43.340223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.732 [2024-11-20 06:15:43.340229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.416335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.416383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.024 [2024-11-20 06:15:43.416395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.416402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.479714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.479752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.024 [2024-11-20 06:15:43.479763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.479771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.479822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.479831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.024 [2024-11-20 06:15:43.479839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.479846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.479873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.479880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.024 [2024-11-20 06:15:43.479892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.479899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.479981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.479991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.024 [2024-11-20 06:15:43.479998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.480005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.480038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.480047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.024 [2024-11-20 06:15:43.480057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.480064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.480097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.480105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.024 [2024-11-20 06:15:43.480113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.480120] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.480159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.024 [2024-11-20 06:15:43.480173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.024 [2024-11-20 06:15:43.480184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.024 [2024-11-20 06:15:43.480191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.024 [2024-11-20 06:15:43.480314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.049 ms, result 0 00:17:25.441 00:17:25.441 00:17:25.441 06:15:44 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74175 00:17:25.441 06:15:44 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74175 00:17:25.441 06:15:44 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74175 ']' 00:17:25.441 06:15:44 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.441 06:15:44 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:25.441 06:15:44 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:25.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.441 06:15:44 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.441 06:15:44 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:25.441 06:15:44 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:25.441 [2024-11-20 06:15:44.763553] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
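The records above and below show the harness pattern for this phase: trim.sh line 92 launches spdk_tgt with the ftl_init log flag, waitforlisten (sourced from autotest_common.sh) blocks until the target answers on the UNIX domain socket /var/tmp/spdk.sock, and only then is the saved JSON configuration replayed. Reconstructed as a standalone sketch — $SPDK_DIR and $FTL_CONFIG are placeholders, not paths from this run:

    "$SPDK_DIR"/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    waitforlisten "$svcpid"                  # polls /var/tmp/spdk.sock until the target is up
    "$SPDK_DIR"/scripts/rpc.py load_config < "$FTL_CONFIG"

load_config re-creates the bdevs from the JSON dump, which is what kicks off the second 'FTL startup' trace sequence below.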
00:17:25.441 [2024-11-20 06:15:44.763671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74175 ] 00:17:25.441 [2024-11-20 06:15:44.925303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.441 [2024-11-20 06:15:45.026129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.012 06:15:45 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:26.012 06:15:45 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:26.012 06:15:45 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:26.271 [2024-11-20 06:15:45.814328] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.271 [2024-11-20 06:15:45.814398] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.534 [2024-11-20 06:15:45.975089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.975146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.534 [2024-11-20 06:15:45.975161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.534 [2024-11-20 06:15:45.975169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.977819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.977854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.534 [2024-11-20 06:15:45.977865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:17:26.534 [2024-11-20 06:15:45.977872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.977970] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.534 [2024-11-20 06:15:45.978639] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.534 [2024-11-20 06:15:45.978666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.978673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.534 [2024-11-20 06:15:45.978684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:17:26.534 [2024-11-20 06:15:45.978691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.979814] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.534 [2024-11-20 06:15:45.992342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.992379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.534 [2024-11-20 06:15:45.992390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.532 ms 00:17:26.534 [2024-11-20 06:15:45.992400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.992481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.992506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.534 [2024-11-20 06:15:45.992515] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:26.534 [2024-11-20 06:15:45.992524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.997322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.997356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.534 [2024-11-20 06:15:45.997365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:17:26.534 [2024-11-20 06:15:45.997374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.997481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.997504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.534 [2024-11-20 06:15:45.997513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:26.534 [2024-11-20 06:15:45.997522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.997554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:45.997564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.534 [2024-11-20 06:15:45.997571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.534 [2024-11-20 06:15:45.997579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:45.997602] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.534 [2024-11-20 06:15:46.001012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:46.001038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.534 [2024-11-20 06:15:46.001048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:17:26.534 [2024-11-20 06:15:46.001056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:46.001091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:46.001099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.534 [2024-11-20 06:15:46.001109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.534 [2024-11-20 06:15:46.001118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:46.001139] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.534 [2024-11-20 06:15:46.001156] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.534 [2024-11-20 06:15:46.001195] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.534 [2024-11-20 06:15:46.001210] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:26.534 [2024-11-20 06:15:46.001315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.534 [2024-11-20 06:15:46.001326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.534 [2024-11-20 06:15:46.001342] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.534 [2024-11-20 06:15:46.001351] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.534 [2024-11-20 06:15:46.001362] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.534 [2024-11-20 06:15:46.001369] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.534 [2024-11-20 06:15:46.001379] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.534 [2024-11-20 06:15:46.001386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.534 [2024-11-20 06:15:46.001396] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.534 [2024-11-20 06:15:46.001403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:46.001411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.534 [2024-11-20 06:15:46.001419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:26.534 [2024-11-20 06:15:46.001427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:46.001536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-20 06:15:46.001548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.534 [2024-11-20 06:15:46.001555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:26.534 [2024-11-20 06:15:46.001564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-20 06:15:46.001668] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.534 [2024-11-20 06:15:46.001679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.534 [2024-11-20 06:15:46.001687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.534 [2024-11-20 06:15:46.001696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.534 [2024-11-20 06:15:46.001703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.534 [2024-11-20 06:15:46.001711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.534 [2024-11-20 06:15:46.001719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.534 [2024-11-20 06:15:46.001730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.534 [2024-11-20 06:15:46.001737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.534 [2024-11-20 06:15:46.001745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.535 [2024-11-20 06:15:46.001751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.535 [2024-11-20 06:15:46.001759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.535 [2024-11-20 06:15:46.001766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.535 [2024-11-20 06:15:46.001773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.535 [2024-11-20 06:15:46.001780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.535 [2024-11-20 06:15:46.001789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.535 
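The startup trace continuing around this point is the same pipeline as before, this time driven by the replayed configuration rather than a direct create call. For reference, the same sequence can be triggered explicitly with the bdev_ftl_create RPC over a base bdev and an NV-cache bdev — a hedged sketch with illustrative bdev names (the run above reports nvc0n1p0 as the write-buffer cache; the base name here is a placeholder):

    "$SPDK_DIR"/scripts/rpc.py bdev_ftl_create -b ftl0 -d base0n1 -c nvc0n1p0

Either path lands in the same 'FTL startup' management process traced here, ending with the layout dump and a Management process finished record.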
[2024-11-20 06:15:46.001795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.535 [2024-11-20 06:15:46.001804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.535 [2024-11-20 06:15:46.001810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.535 [2024-11-20 06:15:46.001829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.535 [2024-11-20 06:15:46.001844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.535 [2024-11-20 06:15:46.001853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.535 [2024-11-20 06:15:46.001868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.535 [2024-11-20 06:15:46.001875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.535 [2024-11-20 06:15:46.001888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.535 [2024-11-20 06:15:46.001896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.535 [2024-11-20 06:15:46.001912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.535 [2024-11-20 06:15:46.001918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.535 [2024-11-20 06:15:46.001932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.535 [2024-11-20 06:15:46.001940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.535 [2024-11-20 06:15:46.001946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.535 [2024-11-20 06:15:46.001954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.535 [2024-11-20 06:15:46.001960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.535 [2024-11-20 06:15:46.001969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.535 [2024-11-20 06:15:46.001984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.535 [2024-11-20 06:15:46.001990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.535 [2024-11-20 06:15:46.001998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.535 [2024-11-20 06:15:46.002007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.535 [2024-11-20 06:15:46.002015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.535 [2024-11-20 06:15:46.002022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.535 [2024-11-20 06:15:46.002032] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:26.535 [2024-11-20 06:15:46.002039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.535 [2024-11-20 06:15:46.002047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.535 [2024-11-20 06:15:46.002054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.535 [2024-11-20 06:15:46.002062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.535 [2024-11-20 06:15:46.002069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.535 [2024-11-20 06:15:46.002078] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.535 [2024-11-20 06:15:46.002088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.535 [2024-11-20 06:15:46.002100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.535 [2024-11-20 06:15:46.002107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.535 [2024-11-20 06:15:46.002117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.535 [2024-11-20 06:15:46.002123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.535 [2024-11-20 06:15:46.002132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.535 [2024-11-20 06:15:46.002139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.535 [2024-11-20 06:15:46.002148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.535 [2024-11-20 06:15:46.002154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.535 [2024-11-20 06:15:46.002163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.535 [2024-11-20 06:15:46.002170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.535 [2024-11-20 06:15:46.002178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.535 [2024-11-20 06:15:46.002185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.535 [2024-11-20 06:15:46.002194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.535 [2024-11-20 06:15:46.002201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.535 [2024-11-20 06:15:46.002209] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.535 [2024-11-20 
06:15:46.002217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.535 [2024-11-20 06:15:46.002227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.535 [2024-11-20 06:15:46.002235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.535 [2024-11-20 06:15:46.002243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.535 [2024-11-20 06:15:46.002250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.535 [2024-11-20 06:15:46.002259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.002266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.535 [2024-11-20 06:15:46.002275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:17:26.535 [2024-11-20 06:15:46.002282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.028027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.028066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.535 [2024-11-20 06:15:46.028079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:17:26.535 [2024-11-20 06:15:46.028089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.028220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.028230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.535 [2024-11-20 06:15:46.028240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:26.535 [2024-11-20 06:15:46.028247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.059296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.059347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.535 [2024-11-20 06:15:46.059361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.022 ms 00:17:26.535 [2024-11-20 06:15:46.059374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.059484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.059517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.535 [2024-11-20 06:15:46.059536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.535 [2024-11-20 06:15:46.059549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.059895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.059923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.535 [2024-11-20 06:15:46.059936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:26.535 [2024-11-20 06:15:46.059944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.060069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.060078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.535 [2024-11-20 06:15:46.060087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:26.535 [2024-11-20 06:15:46.060094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.074874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-20 06:15:46.074909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.535 [2024-11-20 06:15:46.074922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.755 ms 00:17:26.535 [2024-11-20 06:15:46.074930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-20 06:15:46.087476] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:26.535 [2024-11-20 06:15:46.087519] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.535 [2024-11-20 06:15:46.087534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.536 [2024-11-20 06:15:46.087544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.536 [2024-11-20 06:15:46.087555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.478 ms 00:17:26.536 [2024-11-20 06:15:46.087563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.536 [2024-11-20 06:15:46.111788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.536 [2024-11-20 06:15:46.111822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.536 [2024-11-20 06:15:46.111837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.146 ms 00:17:26.536 [2024-11-20 06:15:46.111846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.536 [2024-11-20 06:15:46.123609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.536 [2024-11-20 06:15:46.123640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.536 [2024-11-20 06:15:46.123653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.684 ms 00:17:26.536 [2024-11-20 06:15:46.123660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.536 [2024-11-20 06:15:46.135456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.536 [2024-11-20 06:15:46.135484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.536 [2024-11-20 06:15:46.135503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.728 ms 00:17:26.536 [2024-11-20 06:15:46.135511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.536 [2024-11-20 06:15:46.136133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.536 [2024-11-20 06:15:46.136156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.536 [2024-11-20 06:15:46.136167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:17:26.536 [2024-11-20 06:15:46.136175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.795 [2024-11-20 
06:15:46.211583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.795 [2024-11-20 06:15:46.211633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.795 [2024-11-20 06:15:46.211648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.381 ms 00:17:26.795 [2024-11-20 06:15:46.211657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.795 [2024-11-20 06:15:46.222268] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.795 [2024-11-20 06:15:46.236398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.795 [2024-11-20 06:15:46.236437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.795 [2024-11-20 06:15:46.236452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.629 ms 00:17:26.795 [2024-11-20 06:15:46.236462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.796 [2024-11-20 06:15:46.236563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.796 [2024-11-20 06:15:46.236577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.796 [2024-11-20 06:15:46.236585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:26.796 [2024-11-20 06:15:46.236595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.796 [2024-11-20 06:15:46.236642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.796 [2024-11-20 06:15:46.236652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.796 [2024-11-20 06:15:46.236660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:26.796 [2024-11-20 06:15:46.236671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.796 [2024-11-20 06:15:46.236693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.796 [2024-11-20 06:15:46.236703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.796 [2024-11-20 06:15:46.236711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.796 [2024-11-20 06:15:46.236722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.796 [2024-11-20 06:15:46.236751] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.796 [2024-11-20 06:15:46.236764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.796 [2024-11-20 06:15:46.236771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.796 [2024-11-20 06:15:46.236782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.796 [2024-11-20 06:15:46.236789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.796 [2024-11-20 06:15:46.288158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.796 [2024-11-20 06:15:46.288194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.796 [2024-11-20 06:15:46.288208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.342 ms 00:17:26.796 [2024-11-20 06:15:46.288216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.796 [2024-11-20 06:15:46.288308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.796 [2024-11-20 06:15:46.288319] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.796 [2024-11-20 06:15:46.288329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:26.796 [2024-11-20 06:15:46.288339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.796 [2024-11-20 06:15:46.289712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.796 [2024-11-20 06:15:46.292817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.331 ms, result 0 00:17:26.796 [2024-11-20 06:15:46.295415] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.796 Some configs were skipped because the RPC state that can call them passed over. 00:17:26.796 06:15:46 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:27.366 [2024-11-20 06:15:46.713161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.366 [2024-11-20 06:15:46.713230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:27.366 [2024-11-20 06:15:46.713244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.099 ms 00:17:27.366 [2024-11-20 06:15:46.713254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.366 [2024-11-20 06:15:46.713291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 170.239 ms, result 0 00:17:27.366 true 00:17:27.366 06:15:46 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:27.627 [2024-11-20 06:15:47.165509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.627 [2024-11-20 06:15:47.165568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:27.627 [2024-11-20 06:15:47.165583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 249.965 ms 00:17:27.627 [2024-11-20 06:15:47.165591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.627 [2024-11-20 06:15:47.165632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 250.103 ms, result 0 00:17:27.627 true 00:17:27.627 06:15:47 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74175 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74175 ']' 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74175 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74175 00:17:27.627 killing process with pid 74175 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74175' 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74175 00:17:27.627 06:15:47 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74175 00:17:28.568 [2024-11-20 06:15:47.902848] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.902903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:28.568 [2024-11-20 06:15:47.902915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.568 [2024-11-20 06:15:47.902924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.568 [2024-11-20 06:15:47.902946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:28.568 [2024-11-20 06:15:47.905511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.905550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:28.568 [2024-11-20 06:15:47.905564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:17:28.568 [2024-11-20 06:15:47.905573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.568 [2024-11-20 06:15:47.905829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.905839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.568 [2024-11-20 06:15:47.905850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:28.568 [2024-11-20 06:15:47.905857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.568 [2024-11-20 06:15:47.909958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.909989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:28.568 [2024-11-20 06:15:47.910002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:17:28.568 [2024-11-20 06:15:47.910009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.568 [2024-11-20 06:15:47.916974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.917003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:28.568 [2024-11-20 06:15:47.917015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms 00:17:28.568 [2024-11-20 06:15:47.917023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.568 [2024-11-20 06:15:47.926349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.926379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.568 [2024-11-20 06:15:47.926393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.277 ms 00:17:28.568 [2024-11-20 06:15:47.926407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.568 [2024-11-20 06:15:47.933307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.933339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.568 [2024-11-20 06:15:47.933350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:17:28.568 [2024-11-20 06:15:47.933358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.568 [2024-11-20 06:15:47.933485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.568 [2024-11-20 06:15:47.933512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.568 [2024-11-20 06:15:47.933522] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:28.569 [2024-11-20 06:15:47.933529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.569 [2024-11-20 06:15:47.943419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.569 [2024-11-20 06:15:47.943448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:28.569 [2024-11-20 06:15:47.943459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.870 ms 00:17:28.569 [2024-11-20 06:15:47.943467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.569 [2024-11-20 06:15:47.952836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.569 [2024-11-20 06:15:47.952870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:28.569 [2024-11-20 06:15:47.952884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.319 ms 00:17:28.569 [2024-11-20 06:15:47.952891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.569 [2024-11-20 06:15:47.962164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.569 [2024-11-20 06:15:47.962194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.569 [2024-11-20 06:15:47.962208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.237 ms 00:17:28.569 [2024-11-20 06:15:47.962216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.569 [2024-11-20 06:15:47.971297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.569 [2024-11-20 06:15:47.971327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:28.569 [2024-11-20 06:15:47.971338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.022 ms 00:17:28.569 [2024-11-20 06:15:47.971346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.569 [2024-11-20 06:15:47.971391] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.569 [2024-11-20 06:15:47.971407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971510] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 
[2024-11-20 06:15:47.971731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:28.569 [2024-11-20 06:15:47.971883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:28.570 [2024-11-20 06:15:47.971943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.971991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:28.570 [2024-11-20 06:15:47.972282] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:28.570 [2024-11-20 06:15:47.972296] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:17:28.570 [2024-11-20 06:15:47.972309] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:28.570 [2024-11-20 06:15:47.972320] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:28.570 [2024-11-20 06:15:47.972326] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:28.570 [2024-11-20 06:15:47.972335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:28.570 [2024-11-20 06:15:47.972343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:28.570 [2024-11-20 06:15:47.972352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:28.570 [2024-11-20 06:15:47.972358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:28.570 [2024-11-20 06:15:47.972366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:28.570 [2024-11-20 06:15:47.972373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:28.570 [2024-11-20 06:15:47.972381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:28.570 [2024-11-20 06:15:47.972388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:28.570 [2024-11-20 06:15:47.972398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:17:28.570 [2024-11-20 06:15:47.972405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.570 [2024-11-20 06:15:47.984663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.570 [2024-11-20 06:15:47.984693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.570 [2024-11-20 06:15:47.984708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.234 ms 00:17:28.570 [2024-11-20 06:15:47.984717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.570 [2024-11-20 06:15:47.985070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.570 [2024-11-20 06:15:47.985086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.570 [2024-11-20 06:15:47.985096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:28.570 [2024-11-20 06:15:47.985105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.570 [2024-11-20 06:15:48.030313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.570 [2024-11-20 06:15:48.030369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.570 [2024-11-20 06:15:48.030385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.570 [2024-11-20 06:15:48.030393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.570 [2024-11-20 06:15:48.030536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.570 [2024-11-20 06:15:48.030549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.570 [2024-11-20 06:15:48.030560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.570 [2024-11-20 06:15:48.030570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.570 [2024-11-20 06:15:48.030617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.570 [2024-11-20 06:15:48.030627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.570 [2024-11-20 06:15:48.030638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.570 [2024-11-20 06:15:48.030645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.570 [2024-11-20 06:15:48.030664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.570 [2024-11-20 06:15:48.030672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.570 [2024-11-20 06:15:48.030681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.570 [2024-11-20 06:15:48.030688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.570 [2024-11-20 06:15:48.108918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.570 [2024-11-20 06:15:48.108972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.571 [2024-11-20 06:15:48.108986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.108994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 
06:15:48.163554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.571 [2024-11-20 06:15:48.163600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.571 [2024-11-20 06:15:48.163612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.163620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 06:15:48.163693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.571 [2024-11-20 06:15:48.163702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.571 [2024-11-20 06:15:48.163712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.163718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 06:15:48.163742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.571 [2024-11-20 06:15:48.163749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.571 [2024-11-20 06:15:48.163759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.163764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 06:15:48.163840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.571 [2024-11-20 06:15:48.163847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.571 [2024-11-20 06:15:48.163855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.163861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 06:15:48.163887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.571 [2024-11-20 06:15:48.163894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.571 [2024-11-20 06:15:48.163901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.163907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 06:15:48.163938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.571 [2024-11-20 06:15:48.163944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.571 [2024-11-20 06:15:48.163953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.163959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 06:15:48.163993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.571 [2024-11-20 06:15:48.164001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.571 [2024-11-20 06:15:48.164009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.571 [2024-11-20 06:15:48.164014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.571 [2024-11-20 06:15:48.164121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 261.258 ms, result 0 00:17:29.506 06:15:48 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:29.506 [2024-11-20 06:15:48.886943] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:17:29.506 [2024-11-20 06:15:48.887067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74233 ] 00:17:29.506 [2024-11-20 06:15:49.047647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.506 [2024-11-20 06:15:49.130608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.764 [2024-11-20 06:15:49.342406] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.764 [2024-11-20 06:15:49.342466] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.023 [2024-11-20 06:15:49.490797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.490846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.023 [2024-11-20 06:15:49.490857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:30.023 [2024-11-20 06:15:49.490864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.492993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.493025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.023 [2024-11-20 06:15:49.493032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:17:30.023 [2024-11-20 06:15:49.493038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.493206] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.023 [2024-11-20 06:15:49.493800] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.023 [2024-11-20 06:15:49.493824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.493831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.023 [2024-11-20 06:15:49.493838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:17:30.023 [2024-11-20 06:15:49.493845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.494971] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:30.023 [2024-11-20 06:15:49.504604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.504636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:30.023 [2024-11-20 06:15:49.504644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.634 ms 00:17:30.023 [2024-11-20 06:15:49.504651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.504717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.504727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:30.023 [2024-11-20 06:15:49.504733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:30.023 [2024-11-20 
06:15:49.504739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.509242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.509268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.023 [2024-11-20 06:15:49.509276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.473 ms 00:17:30.023 [2024-11-20 06:15:49.509282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.509363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.509371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.023 [2024-11-20 06:15:49.509378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:30.023 [2024-11-20 06:15:49.509384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.509403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.509412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.023 [2024-11-20 06:15:49.509419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:30.023 [2024-11-20 06:15:49.509425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.509444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:30.023 [2024-11-20 06:15:49.512206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.512230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.023 [2024-11-20 06:15:49.512238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:17:30.023 [2024-11-20 06:15:49.512244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.512274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.512281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.023 [2024-11-20 06:15:49.512287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:30.023 [2024-11-20 06:15:49.512293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.512308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:30.023 [2024-11-20 06:15:49.512324] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:30.023 [2024-11-20 06:15:49.512351] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:30.023 [2024-11-20 06:15:49.512363] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:30.023 [2024-11-20 06:15:49.512443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:30.023 [2024-11-20 06:15:49.512457] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.023 [2024-11-20 06:15:49.512466] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:30.023 [2024-11-20 06:15:49.512474] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.023 [2024-11-20 06:15:49.512483] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.023 [2024-11-20 06:15:49.512500] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:30.023 [2024-11-20 06:15:49.512507] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.023 [2024-11-20 06:15:49.512514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:30.023 [2024-11-20 06:15:49.512519] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:30.023 [2024-11-20 06:15:49.512525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.512531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.023 [2024-11-20 06:15:49.512537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:17:30.023 [2024-11-20 06:15:49.512543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.512613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.023 [2024-11-20 06:15:49.512623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.023 [2024-11-20 06:15:49.512629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:30.023 [2024-11-20 06:15:49.512634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.023 [2024-11-20 06:15:49.512713] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.023 [2024-11-20 06:15:49.512721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.023 [2024-11-20 06:15:49.512728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.023 [2024-11-20 06:15:49.512733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.023 [2024-11-20 06:15:49.512739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:30.023 [2024-11-20 06:15:49.512745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.023 [2024-11-20 06:15:49.512751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:30.023 [2024-11-20 06:15:49.512756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.023 [2024-11-20 06:15:49.512762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:30.023 [2024-11-20 06:15:49.512767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.023 [2024-11-20 06:15:49.512772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.023 [2024-11-20 06:15:49.512777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:30.023 [2024-11-20 06:15:49.512782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.023 [2024-11-20 06:15:49.512794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.023 [2024-11-20 06:15:49.512800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:30.023 [2024-11-20 06:15:49.512805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.023 [2024-11-20 06:15:49.512810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:30.023 [2024-11-20 06:15:49.512816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:30.023 [2024-11-20 06:15:49.512821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.023 [2024-11-20 06:15:49.512826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.023 [2024-11-20 06:15:49.512831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.024 [2024-11-20 06:15:49.512841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.024 [2024-11-20 06:15:49.512846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.024 [2024-11-20 06:15:49.512856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.024 [2024-11-20 06:15:49.512861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.024 [2024-11-20 06:15:49.512871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.024 [2024-11-20 06:15:49.512876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.024 [2024-11-20 06:15:49.512885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.024 [2024-11-20 06:15:49.512890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.024 [2024-11-20 06:15:49.512901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:30.024 [2024-11-20 06:15:49.512906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:30.024 [2024-11-20 06:15:49.512912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.024 [2024-11-20 06:15:49.512917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:30.024 [2024-11-20 06:15:49.512922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:30.024 [2024-11-20 06:15:49.512927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:30.024 [2024-11-20 06:15:49.512937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:30.024 [2024-11-20 06:15:49.512942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512947] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.024 [2024-11-20 06:15:49.512953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.024 [2024-11-20 06:15:49.512959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.024 [2024-11-20 06:15:49.512966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.024 [2024-11-20 06:15:49.512972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.024 [2024-11-20 06:15:49.512977] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.024 [2024-11-20 06:15:49.512982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.024 [2024-11-20 06:15:49.512988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.024 [2024-11-20 06:15:49.512992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.024 [2024-11-20 06:15:49.512998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.024 [2024-11-20 06:15:49.513004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.024 [2024-11-20 06:15:49.513012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.024 [2024-11-20 06:15:49.513018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:30.024 [2024-11-20 06:15:49.513024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:30.024 [2024-11-20 06:15:49.513029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:30.024 [2024-11-20 06:15:49.513034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:30.024 [2024-11-20 06:15:49.513039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:30.024 [2024-11-20 06:15:49.513044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:30.024 [2024-11-20 06:15:49.513050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:30.024 [2024-11-20 06:15:49.513055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:30.024 [2024-11-20 06:15:49.513060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:30.024 [2024-11-20 06:15:49.513066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:30.024 [2024-11-20 06:15:49.513072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:30.024 [2024-11-20 06:15:49.513077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:30.024 [2024-11-20 06:15:49.513082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:30.024 [2024-11-20 06:15:49.513088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:30.024 [2024-11-20 06:15:49.513094] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.024 [2024-11-20 06:15:49.513100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.024 [2024-11-20 06:15:49.513106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.024 [2024-11-20 06:15:49.513111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.024 [2024-11-20 06:15:49.513117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.024 [2024-11-20 06:15:49.513122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.024 [2024-11-20 06:15:49.513128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.513133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.024 [2024-11-20 06:15:49.513141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:17:30.024 [2024-11-20 06:15:49.513147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.534379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.534413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.024 [2024-11-20 06:15:49.534423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.193 ms 00:17:30.024 [2024-11-20 06:15:49.534429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.534551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.534563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.024 [2024-11-20 06:15:49.534570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:30.024 [2024-11-20 06:15:49.534577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.577190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.577231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.024 [2024-11-20 06:15:49.577242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.595 ms 00:17:30.024 [2024-11-20 06:15:49.577250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.577339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.577349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.024 [2024-11-20 06:15:49.577357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:30.024 [2024-11-20 06:15:49.577363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.577673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.577692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.024 [2024-11-20 06:15:49.577700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:30.024 [2024-11-20 06:15:49.577706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.577819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.577830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.024 [2024-11-20 06:15:49.577837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:30.024 [2024-11-20 06:15:49.577843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.588808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.024 [2024-11-20 06:15:49.588837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.024 [2024-11-20 06:15:49.588846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.948 ms 00:17:30.024 [2024-11-20 06:15:49.588852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.024 [2024-11-20 06:15:49.598613] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:30.024 [2024-11-20 06:15:49.598646] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:30.025 [2024-11-20 06:15:49.598657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.025 [2024-11-20 06:15:49.598665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:30.025 [2024-11-20 06:15:49.598672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.692 ms 00:17:30.025 [2024-11-20 06:15:49.598678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.025 [2024-11-20 06:15:49.617924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.025 [2024-11-20 06:15:49.617965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:30.025 [2024-11-20 06:15:49.617975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.183 ms 00:17:30.025 [2024-11-20 06:15:49.617982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.025 [2024-11-20 06:15:49.627010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.025 [2024-11-20 06:15:49.627039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:30.025 [2024-11-20 06:15:49.627047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.951 ms 00:17:30.025 [2024-11-20 06:15:49.627053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.025 [2024-11-20 06:15:49.636054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.025 [2024-11-20 06:15:49.636082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:30.025 [2024-11-20 06:15:49.636090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.949 ms 00:17:30.025 [2024-11-20 06:15:49.636096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.025 [2024-11-20 06:15:49.636610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.025 [2024-11-20 06:15:49.636633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.025 [2024-11-20 06:15:49.636640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:17:30.025 [2024-11-20 06:15:49.636646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.681584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.283 [2024-11-20 06:15:49.681627] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:30.283 [2024-11-20 06:15:49.681637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.918 ms 00:17:30.283 [2024-11-20 06:15:49.681645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.690319] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.283 [2024-11-20 06:15:49.702551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.283 [2024-11-20 06:15:49.702586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.283 [2024-11-20 06:15:49.702597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.812 ms 00:17:30.283 [2024-11-20 06:15:49.702608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.702704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.283 [2024-11-20 06:15:49.702713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:30.283 [2024-11-20 06:15:49.702720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:30.283 [2024-11-20 06:15:49.702726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.702780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.283 [2024-11-20 06:15:49.702788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.283 [2024-11-20 06:15:49.702794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:30.283 [2024-11-20 06:15:49.702800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.702822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.283 [2024-11-20 06:15:49.702829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.283 [2024-11-20 06:15:49.702835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:30.283 [2024-11-20 06:15:49.702841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.702867] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:30.283 [2024-11-20 06:15:49.702875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.283 [2024-11-20 06:15:49.702880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:30.283 [2024-11-20 06:15:49.702887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.283 [2024-11-20 06:15:49.702893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.721342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.283 [2024-11-20 06:15:49.721379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.283 [2024-11-20 06:15:49.721388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.434 ms 00:17:30.283 [2024-11-20 06:15:49.721394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.283 [2024-11-20 06:15:49.721482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.284 [2024-11-20 06:15:49.721500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.284 [2024-11-20 06:15:49.721507] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:30.284 [2024-11-20 06:15:49.721513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.284 [2024-11-20 06:15:49.722150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.284 [2024-11-20 06:15:49.724565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 231.146 ms, result 0 00:17:30.284 [2024-11-20 06:15:49.725617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.284 [2024-11-20 06:15:49.736665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.224  [2024-11-20T06:15:51.877Z] Copying: 25/256 [MB] (25 MBps) [2024-11-20T06:15:52.821Z] Copying: 46/256 [MB] (21 MBps) [2024-11-20T06:15:54.203Z] Copying: 68/256 [MB] (22 MBps) [2024-11-20T06:15:55.135Z] Copying: 94/256 [MB] (25 MBps) [2024-11-20T06:15:56.067Z] Copying: 131/256 [MB] (36 MBps) [2024-11-20T06:15:57.000Z] Copying: 171/256 [MB] (40 MBps) [2024-11-20T06:15:57.933Z] Copying: 212/256 [MB] (40 MBps) [2024-11-20T06:15:57.933Z] Copying: 255/256 [MB] (42 MBps) [2024-11-20T06:15:58.501Z] Copying: 256/256 [MB] (average 31 MBps)[2024-11-20 06:15:58.219776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.868 [2024-11-20 06:15:58.231100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.868 [2024-11-20 06:15:58.231156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.868 [2024-11-20 06:15:58.231172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:38.868 [2024-11-20 06:15:58.231193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.868 [2024-11-20 06:15:58.231221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.869 [2024-11-20 06:15:58.234457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.234509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.869 [2024-11-20 06:15:58.234522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:17:38.869 [2024-11-20 06:15:58.234533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.234893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.234915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.869 [2024-11-20 06:15:58.234926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:17:38.869 [2024-11-20 06:15:58.234935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.239233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.239262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.869 [2024-11-20 06:15:58.239272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.278 ms 00:17:38.869 [2024-11-20 06:15:58.239287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.246440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 
[2024-11-20 06:15:58.246473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:38.869 [2024-11-20 06:15:58.246483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.133 ms 00:17:38.869 [2024-11-20 06:15:58.246499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.274379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.274437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.869 [2024-11-20 06:15:58.274450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.815 ms 00:17:38.869 [2024-11-20 06:15:58.274457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.289074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.289130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.869 [2024-11-20 06:15:58.289145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.523 ms 00:17:38.869 [2024-11-20 06:15:58.289153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.289312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.289324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.869 [2024-11-20 06:15:58.289333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:38.869 [2024-11-20 06:15:58.289340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.313187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.313236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:38.869 [2024-11-20 06:15:58.313248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.819 ms 00:17:38.869 [2024-11-20 06:15:58.313256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.336186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.336237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:38.869 [2024-11-20 06:15:58.336249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.879 ms 00:17:38.869 [2024-11-20 06:15:58.336257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.359192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.359245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.869 [2024-11-20 06:15:58.359258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.884 ms 00:17:38.869 [2024-11-20 06:15:58.359265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.381914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.869 [2024-11-20 06:15:58.381960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.869 [2024-11-20 06:15:58.381972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.562 ms 00:17:38.869 [2024-11-20 06:15:58.381980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.869 [2024-11-20 06:15:58.382022] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.869 [2024-11-20 06:15:58.382039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382235] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.869 [2024-11-20 06:15:58.382433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 
06:15:58.382440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:17:38.870 [2024-11-20 06:15:58.382673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.870 [2024-11-20 06:15:58.382901] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.870 [2024-11-20 06:15:58.382909] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 174d453c-b9d5-4fbe-9e30-c11a2f569373 00:17:38.870 [2024-11-20 06:15:58.382917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.870 [2024-11-20 06:15:58.382924] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:38.870 [2024-11-20 06:15:58.382932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.870 [2024-11-20 06:15:58.382939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.870 [2024-11-20 06:15:58.382946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.870 [2024-11-20 06:15:58.382953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.870 [2024-11-20 06:15:58.382961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.870 [2024-11-20 06:15:58.382968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.870 [2024-11-20 06:15:58.382974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.870 [2024-11-20 06:15:58.382981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.870 [2024-11-20 06:15:58.382990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.870 [2024-11-20 06:15:58.382998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:17:38.870 [2024-11-20 06:15:58.383005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.870 [2024-11-20 06:15:58.395689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.870 [2024-11-20 06:15:58.395732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.870 [2024-11-20 06:15:58.395742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:17:38.870 [2024-11-20 06:15:58.395751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.870 [2024-11-20 06:15:58.396120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.870 [2024-11-20 06:15:58.396142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.870 [2024-11-20 06:15:58.396151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:38.870 [2024-11-20 06:15:58.396158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.870 [2024-11-20 06:15:58.430739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.870 [2024-11-20 06:15:58.430796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.870 [2024-11-20 06:15:58.430808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.870 [2024-11-20 06:15:58.430816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.870 [2024-11-20 06:15:58.430923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.870 [2024-11-20 06:15:58.430933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.870 [2024-11-20 06:15:58.430941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:38.870 [2024-11-20 06:15:58.430948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.871 [2024-11-20 06:15:58.430992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.871 [2024-11-20 06:15:58.431001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.871 [2024-11-20 06:15:58.431008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.871 [2024-11-20 06:15:58.431016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.871 [2024-11-20 06:15:58.431036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.871 [2024-11-20 06:15:58.431046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.871 [2024-11-20 06:15:58.431054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.871 [2024-11-20 06:15:58.431061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.506956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.507005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.130 [2024-11-20 06:15:58.507016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.507024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.568752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.568804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.130 [2024-11-20 06:15:58.568814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.568823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.568900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.568910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.130 [2024-11-20 06:15:58.568919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.568926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.568954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.568963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.130 [2024-11-20 06:15:58.568973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.568981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.569065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.569075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.130 [2024-11-20 06:15:58.569083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.569091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.569121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.569130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.130 
[2024-11-20 06:15:58.569138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.569149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.569184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.569204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.130 [2024-11-20 06:15:58.569212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.569220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.569261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.130 [2024-11-20 06:15:58.569276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.130 [2024-11-20 06:15:58.569286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.130 [2024-11-20 06:15:58.569293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.130 [2024-11-20 06:15:58.569418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.329 ms, result 0 00:17:39.701 00:17:39.701 00:17:39.701 06:15:59 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.269 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:40.269 06:15:59 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:40.269 06:15:59 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:40.269 06:15:59 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.269 06:15:59 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.269 06:15:59 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:40.269 06:15:59 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:40.269 06:15:59 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74175 00:17:40.269 06:15:59 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74175 ']' 00:17:40.270 06:15:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74175 00:17:40.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74175) - No such process 00:17:40.270 Process with pid 74175 is not found 00:17:40.270 06:15:59 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74175 is not found' 00:17:40.270 00:17:40.270 real 0m59.645s 00:17:40.270 user 1m34.350s 00:17:40.270 sys 0m5.359s 00:17:40.270 06:15:59 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:40.270 06:15:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:40.270 ************************************ 00:17:40.270 END TEST ftl_trim 00:17:40.270 ************************************ 00:17:40.527 06:15:59 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:40.527 06:15:59 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:40.527 06:15:59 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:40.527 06:15:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:40.527 ************************************ 00:17:40.527 START TEST ftl_restore 00:17:40.527 ************************************ 
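The restore test starting here is invoked as restore.sh -c 0000:00:10.0 0000:00:11.0: -c selects the PCI address used for the NV cache, and the remaining positional argument is the base device, exactly what the getopts trace below records (nv_cache=0000:00:10.0, device=0000:00:11.0, timeout=240). Condensed, that argument handling has the following shape (a sketch with variable names taken from the trace; other option branches and error handling are omitted):

  # Simplified shape of restore.sh option parsing, per the trace below.
  while getopts :u:c:f opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;  # 0000:00:10.0 in this run
    esac
  done
  shift 2                     # drop "-c <bdf>", as the trace shows
  device=$1                   # 0000:00:11.0 in this run
  timeout=240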
00:17:40.527 06:15:59 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:40.527 * Looking for test storage... 00:17:40.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.527 06:15:59 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:40.527 06:15:59 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:17:40.527 06:15:59 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:40.527 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.527 06:16:00 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.528 06:16:00 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:40.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.528 --rc genhtml_branch_coverage=1 00:17:40.528 --rc genhtml_function_coverage=1 00:17:40.528 --rc genhtml_legend=1 00:17:40.528 --rc geninfo_all_blocks=1 00:17:40.528 --rc geninfo_unexecuted_blocks=1 00:17:40.528 00:17:40.528 ' 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:40.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.528 --rc genhtml_branch_coverage=1 00:17:40.528 --rc genhtml_function_coverage=1 00:17:40.528 --rc genhtml_legend=1 00:17:40.528 --rc geninfo_all_blocks=1 00:17:40.528 --rc geninfo_unexecuted_blocks=1 00:17:40.528 00:17:40.528 ' 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:40.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.528 --rc genhtml_branch_coverage=1 00:17:40.528 --rc genhtml_function_coverage=1 00:17:40.528 --rc genhtml_legend=1 00:17:40.528 --rc geninfo_all_blocks=1 00:17:40.528 --rc geninfo_unexecuted_blocks=1 00:17:40.528 00:17:40.528 ' 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:40.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.528 --rc genhtml_branch_coverage=1 00:17:40.528 --rc genhtml_function_coverage=1 00:17:40.528 --rc genhtml_legend=1 00:17:40.528 --rc geninfo_all_blocks=1 00:17:40.528 --rc geninfo_unexecuted_blocks=1 00:17:40.528 00:17:40.528 ' 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
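The xtrace output above is scripts/common.sh deciding which lcov flags to use: lt 1.15 2 splits both version strings on ., -, and :, then compares them field by field until one side wins. A condensed sketch of the comparison being traced (standalone, with the two versions from this run inlined; the real cmp_versions is parameterized and also handles the equal case via its op argument):

  # Field-wise version comparison, as stepped through in the trace above.
  IFS='.-:' read -ra ver1 <<< "1.15"
  IFS='.-:' read -ra ver2 <<< "2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo "newer"; break; }
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo "older"; break; }
  done

Here field 0 already decides it (1 < 2), so lt 1.15 2 succeeds and the --rc lcov_branch_coverage=1 options seen above are selected.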
00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.VouTc2QF1N 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:40.528 
06:16:00 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74417 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74417 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74417 ']' 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.528 06:16:00 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.528 06:16:00 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:40.528 [2024-11-20 06:16:00.157365] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:17:40.528 [2024-11-20 06:16:00.157501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74417 ] 00:17:40.786 [2024-11-20 06:16:00.315827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.786 [2024-11-20 06:16:00.415069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.721 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.721 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:41.721 06:16:01 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:41.721 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:41.721 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:41.721 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:41.721 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:41.721 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:41.978 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:41.979 { 00:17:41.979 "name": "nvme0n1", 00:17:41.979 "aliases": [ 00:17:41.979 "edc1e46b-10ab-49ea-bc17-0e1cccbd246d" 00:17:41.979 ], 00:17:41.979 "product_name": "NVMe disk", 00:17:41.979 "block_size": 4096, 00:17:41.979 "num_blocks": 1310720, 00:17:41.979 "uuid": 
"edc1e46b-10ab-49ea-bc17-0e1cccbd246d", 00:17:41.979 "numa_id": -1, 00:17:41.979 "assigned_rate_limits": { 00:17:41.979 "rw_ios_per_sec": 0, 00:17:41.979 "rw_mbytes_per_sec": 0, 00:17:41.979 "r_mbytes_per_sec": 0, 00:17:41.979 "w_mbytes_per_sec": 0 00:17:41.979 }, 00:17:41.979 "claimed": true, 00:17:41.979 "claim_type": "read_many_write_one", 00:17:41.979 "zoned": false, 00:17:41.979 "supported_io_types": { 00:17:41.979 "read": true, 00:17:41.979 "write": true, 00:17:41.979 "unmap": true, 00:17:41.979 "flush": true, 00:17:41.979 "reset": true, 00:17:41.979 "nvme_admin": true, 00:17:41.979 "nvme_io": true, 00:17:41.979 "nvme_io_md": false, 00:17:41.979 "write_zeroes": true, 00:17:41.979 "zcopy": false, 00:17:41.979 "get_zone_info": false, 00:17:41.979 "zone_management": false, 00:17:41.979 "zone_append": false, 00:17:41.979 "compare": true, 00:17:41.979 "compare_and_write": false, 00:17:41.979 "abort": true, 00:17:41.979 "seek_hole": false, 00:17:41.979 "seek_data": false, 00:17:41.979 "copy": true, 00:17:41.979 "nvme_iov_md": false 00:17:41.979 }, 00:17:41.979 "driver_specific": { 00:17:41.979 "nvme": [ 00:17:41.979 { 00:17:41.979 "pci_address": "0000:00:11.0", 00:17:41.979 "trid": { 00:17:41.979 "trtype": "PCIe", 00:17:41.979 "traddr": "0000:00:11.0" 00:17:41.979 }, 00:17:41.979 "ctrlr_data": { 00:17:41.979 "cntlid": 0, 00:17:41.979 "vendor_id": "0x1b36", 00:17:41.979 "model_number": "QEMU NVMe Ctrl", 00:17:41.979 "serial_number": "12341", 00:17:41.979 "firmware_revision": "8.0.0", 00:17:41.979 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:41.979 "oacs": { 00:17:41.979 "security": 0, 00:17:41.979 "format": 1, 00:17:41.979 "firmware": 0, 00:17:41.979 "ns_manage": 1 00:17:41.979 }, 00:17:41.979 "multi_ctrlr": false, 00:17:41.979 "ana_reporting": false 00:17:41.979 }, 00:17:41.979 "vs": { 00:17:41.979 "nvme_version": "1.4" 00:17:41.979 }, 00:17:41.979 "ns_data": { 00:17:41.979 "id": 1, 00:17:41.979 "can_share": false 00:17:41.979 } 00:17:41.979 } 00:17:41.979 ], 00:17:41.979 "mp_policy": "active_passive" 00:17:41.979 } 00:17:41.979 } 00:17:41.979 ]' 00:17:41.979 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:41.979 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:41.979 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:41.979 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:41.979 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:41.979 06:16:01 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:17:41.979 06:16:01 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:41.979 06:16:01 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:41.979 06:16:01 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:42.237 06:16:01 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:42.237 06:16:01 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:42.237 06:16:01 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7507f9ff-cad8-417c-bf6b-5fd3e53b7de4 00:17:42.237 06:16:01 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:42.237 06:16:01 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7507f9ff-cad8-417c-bf6b-5fd3e53b7de4 00:17:42.494 06:16:02 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:42.754 06:16:02 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b2defe2b-ecf7-48f6-82db-dd897c68e77d 00:17:42.754 06:16:02 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b2defe2b-ecf7-48f6-82db-dd897c68e77d 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:43.013 06:16:02 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.013 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.013 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:43.013 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:43.013 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:43.013 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.272 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:43.272 { 00:17:43.272 "name": "7cde5285-c20d-47e5-ad3a-7fe490bea780", 00:17:43.272 "aliases": [ 00:17:43.272 "lvs/nvme0n1p0" 00:17:43.272 ], 00:17:43.272 "product_name": "Logical Volume", 00:17:43.272 "block_size": 4096, 00:17:43.272 "num_blocks": 26476544, 00:17:43.272 "uuid": "7cde5285-c20d-47e5-ad3a-7fe490bea780", 00:17:43.272 "assigned_rate_limits": { 00:17:43.272 "rw_ios_per_sec": 0, 00:17:43.272 "rw_mbytes_per_sec": 0, 00:17:43.272 "r_mbytes_per_sec": 0, 00:17:43.272 "w_mbytes_per_sec": 0 00:17:43.272 }, 00:17:43.272 "claimed": false, 00:17:43.272 "zoned": false, 00:17:43.272 "supported_io_types": { 00:17:43.272 "read": true, 00:17:43.272 "write": true, 00:17:43.272 "unmap": true, 00:17:43.272 "flush": false, 00:17:43.272 "reset": true, 00:17:43.272 "nvme_admin": false, 00:17:43.272 "nvme_io": false, 00:17:43.272 "nvme_io_md": false, 00:17:43.272 "write_zeroes": true, 00:17:43.272 "zcopy": false, 00:17:43.272 "get_zone_info": false, 00:17:43.272 "zone_management": false, 00:17:43.272 "zone_append": false, 00:17:43.272 "compare": false, 00:17:43.272 "compare_and_write": false, 00:17:43.272 "abort": false, 00:17:43.272 "seek_hole": true, 00:17:43.272 "seek_data": true, 00:17:43.272 "copy": false, 00:17:43.272 "nvme_iov_md": false 00:17:43.272 }, 00:17:43.272 "driver_specific": { 00:17:43.272 "lvol": { 00:17:43.272 "lvol_store_uuid": "b2defe2b-ecf7-48f6-82db-dd897c68e77d", 00:17:43.272 "base_bdev": "nvme0n1", 00:17:43.272 "thin_provision": true, 00:17:43.272 "num_allocated_clusters": 0, 00:17:43.272 "snapshot": false, 00:17:43.272 "clone": false, 00:17:43.272 "esnap_clone": false 00:17:43.272 } 00:17:43.272 } 00:17:43.272 } 00:17:43.272 ]' 00:17:43.272 06:16:02 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:43.272 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:43.272 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:43.272 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:43.272 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:43.273 06:16:02 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:43.273 06:16:02 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:43.273 06:16:02 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:43.273 06:16:02 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:43.531 06:16:03 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:43.531 06:16:03 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:43.531 06:16:03 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.531 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.531 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:43.531 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:43.531 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:43.531 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:43.791 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:43.791 { 00:17:43.791 "name": "7cde5285-c20d-47e5-ad3a-7fe490bea780", 00:17:43.791 "aliases": [ 00:17:43.791 "lvs/nvme0n1p0" 00:17:43.791 ], 00:17:43.791 "product_name": "Logical Volume", 00:17:43.791 "block_size": 4096, 00:17:43.791 "num_blocks": 26476544, 00:17:43.791 "uuid": "7cde5285-c20d-47e5-ad3a-7fe490bea780", 00:17:43.791 "assigned_rate_limits": { 00:17:43.791 "rw_ios_per_sec": 0, 00:17:43.791 "rw_mbytes_per_sec": 0, 00:17:43.791 "r_mbytes_per_sec": 0, 00:17:43.791 "w_mbytes_per_sec": 0 00:17:43.791 }, 00:17:43.791 "claimed": false, 00:17:43.791 "zoned": false, 00:17:43.791 "supported_io_types": { 00:17:43.791 "read": true, 00:17:43.791 "write": true, 00:17:43.791 "unmap": true, 00:17:43.791 "flush": false, 00:17:43.791 "reset": true, 00:17:43.791 "nvme_admin": false, 00:17:43.791 "nvme_io": false, 00:17:43.791 "nvme_io_md": false, 00:17:43.791 "write_zeroes": true, 00:17:43.791 "zcopy": false, 00:17:43.791 "get_zone_info": false, 00:17:43.791 "zone_management": false, 00:17:43.791 "zone_append": false, 00:17:43.791 "compare": false, 00:17:43.791 "compare_and_write": false, 00:17:43.791 "abort": false, 00:17:43.791 "seek_hole": true, 00:17:43.791 "seek_data": true, 00:17:43.791 "copy": false, 00:17:43.791 "nvme_iov_md": false 00:17:43.791 }, 00:17:43.791 "driver_specific": { 00:17:43.791 "lvol": { 00:17:43.791 "lvol_store_uuid": "b2defe2b-ecf7-48f6-82db-dd897c68e77d", 00:17:43.791 "base_bdev": "nvme0n1", 00:17:43.791 "thin_provision": true, 00:17:43.791 "num_allocated_clusters": 0, 00:17:43.791 "snapshot": false, 00:17:43.791 "clone": false, 00:17:43.791 "esnap_clone": false 00:17:43.791 } 00:17:43.791 } 00:17:43.791 } 00:17:43.791 ]' 00:17:43.791 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
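get_bdev_size, as traced at autotest_common.sh@1380-1390, derives a bdev's size in MiB from the bdev_get_bdevs JSON: block_size × num_blocks / 2^20. For this thin-provisioned lvol that is 4096 × 26476544 / 1048576 = 103424 MiB; for the raw nvme0n1 earlier it was 4096 × 1310720 / 1048576 = 5120 MiB. A standalone sketch of the same computation, using the rpc.py path and jq filters exactly as they appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))   # 4096 * 26476544 / 1048576 = 103424 (MiB)
    }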
00:17:43.791 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:43.791 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:43.791 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:43.791 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:43.791 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:43.791 06:16:03 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:43.791 06:16:03 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:44.049 06:16:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:44.049 06:16:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:44.049 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:44.049 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:44.049 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:44.049 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:44.049 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cde5285-c20d-47e5-ad3a-7fe490bea780 00:17:44.307 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:44.307 { 00:17:44.307 "name": "7cde5285-c20d-47e5-ad3a-7fe490bea780", 00:17:44.307 "aliases": [ 00:17:44.307 "lvs/nvme0n1p0" 00:17:44.307 ], 00:17:44.307 "product_name": "Logical Volume", 00:17:44.307 "block_size": 4096, 00:17:44.307 "num_blocks": 26476544, 00:17:44.307 "uuid": "7cde5285-c20d-47e5-ad3a-7fe490bea780", 00:17:44.307 "assigned_rate_limits": { 00:17:44.307 "rw_ios_per_sec": 0, 00:17:44.307 "rw_mbytes_per_sec": 0, 00:17:44.307 "r_mbytes_per_sec": 0, 00:17:44.307 "w_mbytes_per_sec": 0 00:17:44.307 }, 00:17:44.307 "claimed": false, 00:17:44.307 "zoned": false, 00:17:44.307 "supported_io_types": { 00:17:44.307 "read": true, 00:17:44.307 "write": true, 00:17:44.307 "unmap": true, 00:17:44.307 "flush": false, 00:17:44.307 "reset": true, 00:17:44.307 "nvme_admin": false, 00:17:44.307 "nvme_io": false, 00:17:44.307 "nvme_io_md": false, 00:17:44.307 "write_zeroes": true, 00:17:44.307 "zcopy": false, 00:17:44.307 "get_zone_info": false, 00:17:44.307 "zone_management": false, 00:17:44.307 "zone_append": false, 00:17:44.307 "compare": false, 00:17:44.307 "compare_and_write": false, 00:17:44.307 "abort": false, 00:17:44.307 "seek_hole": true, 00:17:44.307 "seek_data": true, 00:17:44.307 "copy": false, 00:17:44.307 "nvme_iov_md": false 00:17:44.307 }, 00:17:44.307 "driver_specific": { 00:17:44.307 "lvol": { 00:17:44.307 "lvol_store_uuid": "b2defe2b-ecf7-48f6-82db-dd897c68e77d", 00:17:44.307 "base_bdev": "nvme0n1", 00:17:44.307 "thin_provision": true, 00:17:44.307 "num_allocated_clusters": 0, 00:17:44.307 "snapshot": false, 00:17:44.307 "clone": false, 00:17:44.307 "esnap_clone": false 00:17:44.307 } 00:17:44.307 } 00:17:44.307 } 00:17:44.307 ]' 00:17:44.307 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:44.307 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:44.307 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:44.307 06:16:03 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:17:44.308 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:44.308 06:16:03 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:44.308 06:16:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:44.308 06:16:03 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7cde5285-c20d-47e5-ad3a-7fe490bea780 --l2p_dram_limit 10' 00:17:44.308 06:16:03 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:44.308 06:16:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:44.308 06:16:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:44.308 06:16:03 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:44.308 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:44.308 06:16:03 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7cde5285-c20d-47e5-ad3a-7fe490bea780 --l2p_dram_limit 10 -c nvc0n1p0 00:17:44.566 [2024-11-20 06:16:04.059313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 06:16:04.059364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.566 [2024-11-20 06:16:04.059378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.566 [2024-11-20 06:16:04.059385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-20 06:16:04.059435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 06:16:04.059444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.566 [2024-11-20 06:16:04.059452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:44.566 [2024-11-20 06:16:04.059458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-20 06:16:04.059479] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.566 [2024-11-20 06:16:04.060078] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.566 [2024-11-20 06:16:04.060101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 06:16:04.060108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.566 [2024-11-20 06:16:04.060117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:17:44.566 [2024-11-20 06:16:04.060124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-20 06:16:04.060258] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 65e3eda6-79b4-4218-8e10-01f7dd4585d1 00:17:44.566 [2024-11-20 06:16:04.061276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 06:16:04.061301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:44.566 [2024-11-20 06:16:04.061309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:44.566 [2024-11-20 06:16:04.061317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-20 06:16:04.066227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 
06:16:04.066264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.566 [2024-11-20 06:16:04.066272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.877 ms 00:17:44.566 [2024-11-20 06:16:04.066280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-20 06:16:04.066356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 06:16:04.066365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.566 [2024-11-20 06:16:04.066372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:44.566 [2024-11-20 06:16:04.066381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-20 06:16:04.066426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 06:16:04.066436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.566 [2024-11-20 06:16:04.066442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.566 [2024-11-20 06:16:04.066451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-20 06:16:04.066468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.566 [2024-11-20 06:16:04.069463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-20 06:16:04.069504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.566 [2024-11-20 06:16:04.069515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:17:44.567 [2024-11-20 06:16:04.069522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-20 06:16:04.069555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-20 06:16:04.069561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.567 [2024-11-20 06:16:04.069569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:44.567 [2024-11-20 06:16:04.069575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-20 06:16:04.069597] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:44.567 [2024-11-20 06:16:04.069711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.567 [2024-11-20 06:16:04.069728] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.567 [2024-11-20 06:16:04.069737] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.567 [2024-11-20 06:16:04.069747] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.567 [2024-11-20 06:16:04.069755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.567 [2024-11-20 06:16:04.069763] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.567 [2024-11-20 06:16:04.069768] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.567 [2024-11-20 06:16:04.069777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.567 [2024-11-20 06:16:04.069783] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.567 [2024-11-20 06:16:04.069791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-20 06:16:04.069797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.567 [2024-11-20 06:16:04.069804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:44.567 [2024-11-20 06:16:04.069816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-20 06:16:04.069885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-20 06:16:04.069896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.567 [2024-11-20 06:16:04.069903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:44.567 [2024-11-20 06:16:04.069909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-20 06:16:04.069993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.567 [2024-11-20 06:16:04.070003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.567 [2024-11-20 06:16:04.070012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.567 [2024-11-20 06:16:04.070031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.567 [2024-11-20 06:16:04.070050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.567 [2024-11-20 06:16:04.070063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.567 [2024-11-20 06:16:04.070068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.567 [2024-11-20 06:16:04.070075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.567 [2024-11-20 06:16:04.070080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.567 [2024-11-20 06:16:04.070086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:44.567 [2024-11-20 06:16:04.070091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.567 [2024-11-20 06:16:04.070106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.567 [2024-11-20 06:16:04.070126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.567 
[2024-11-20 06:16:04.070143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.567 [2024-11-20 06:16:04.070161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.567 [2024-11-20 06:16:04.070178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.567 [2024-11-20 06:16:04.070197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.567 [2024-11-20 06:16:04.070209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.567 [2024-11-20 06:16:04.070214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:44.567 [2024-11-20 06:16:04.070220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.567 [2024-11-20 06:16:04.070225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.567 [2024-11-20 06:16:04.070232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:44.567 [2024-11-20 06:16:04.070237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.567 [2024-11-20 06:16:04.070248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:44.567 [2024-11-20 06:16:04.070255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070260] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.567 [2024-11-20 06:16:04.070267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.567 [2024-11-20 06:16:04.070273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.567 [2024-11-20 06:16:04.070287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.567 [2024-11-20 06:16:04.070296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.567 [2024-11-20 06:16:04.070301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.567 [2024-11-20 06:16:04.070308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.567 [2024-11-20 06:16:04.070313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.567 [2024-11-20 06:16:04.070320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.567 [2024-11-20 06:16:04.070329] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.567 [2024-11-20 
06:16:04.070337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.567 [2024-11-20 06:16:04.070345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.567 [2024-11-20 06:16:04.070352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:44.567 [2024-11-20 06:16:04.070358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:44.567 [2024-11-20 06:16:04.070365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:44.567 [2024-11-20 06:16:04.070371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:44.567 [2024-11-20 06:16:04.070378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:44.567 [2024-11-20 06:16:04.070384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:44.567 [2024-11-20 06:16:04.070391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:44.567 [2024-11-20 06:16:04.070397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:44.567 [2024-11-20 06:16:04.070405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:44.567 [2024-11-20 06:16:04.070411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:44.567 [2024-11-20 06:16:04.070417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:44.567 [2024-11-20 06:16:04.070423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:44.567 [2024-11-20 06:16:04.070431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:44.567 [2024-11-20 06:16:04.070436] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.567 [2024-11-20 06:16:04.070445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.567 [2024-11-20 06:16:04.070451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.567 [2024-11-20 06:16:04.070458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.567 [2024-11-20 06:16:04.070463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.568 [2024-11-20 06:16:04.070471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.568 [2024-11-20 06:16:04.070477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.568 [2024-11-20 06:16:04.070484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.568 [2024-11-20 06:16:04.070503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:44.568 [2024-11-20 06:16:04.070512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.568 [2024-11-20 06:16:04.070559] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:44.568 [2024-11-20 06:16:04.070571] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:47.093 [2024-11-20 06:16:06.302740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.302835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:47.093 [2024-11-20 06:16:06.302861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2232.171 ms 00:17:47.093 [2024-11-20 06:16:06.302875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.329348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.329413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.093 [2024-11-20 06:16:06.329433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.098 ms 00:17:47.093 [2024-11-20 06:16:06.329446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.329657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.329680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:47.093 [2024-11-20 06:16:06.329695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:47.093 [2024-11-20 06:16:06.329717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.360908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.360971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.093 [2024-11-20 06:16:06.360990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.137 ms 00:17:47.093 [2024-11-20 06:16:06.361005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.361059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.361079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.093 [2024-11-20 06:16:06.361092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:47.093 [2024-11-20 06:16:06.361104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.361558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.361593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.093 [2024-11-20 06:16:06.361607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:17:47.093 [2024-11-20 06:16:06.361621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 
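One genuine defect surfaces earlier in this startup sequence: at restore.sh@54 the test `'[' '' -eq 1 ']'` runs with an empty left operand, so bash reports `line 54: [: : integer expression expected` and the condition falls through as false instead of being decided cleanly. A hedged fix sketch — the variable name `flag` is a placeholder, since the actual variable tested at line 54 is not visible in this log:

    # before:  [ "$flag" -eq 1 ]      # errors out when $flag is empty
    # after: default the value so the numeric test is always well-formed
    if [[ ${flag:-0} -eq 1 ]]; then
        :   # branch body unchanged
    fi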
[2024-11-20 06:16:06.361784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.361814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.093 [2024-11-20 06:16:06.361832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:47.093 [2024-11-20 06:16:06.361850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.376154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.376210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.093 [2024-11-20 06:16:06.376226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.275 ms 00:17:47.093 [2024-11-20 06:16:06.376240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.387846] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:47.093 [2024-11-20 06:16:06.390832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.390886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:47.093 [2024-11-20 06:16:06.390905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.441 ms 00:17:47.093 [2024-11-20 06:16:06.390916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.460157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.460223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:47.093 [2024-11-20 06:16:06.460247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.181 ms 00:17:47.093 [2024-11-20 06:16:06.460259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.460533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.460558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:47.093 [2024-11-20 06:16:06.460580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:17:47.093 [2024-11-20 06:16:06.460593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.485530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.485591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:47.093 [2024-11-20 06:16:06.485612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.842 ms 00:17:47.093 [2024-11-20 06:16:06.485624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.510049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.510107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:47.093 [2024-11-20 06:16:06.510128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.328 ms 00:17:47.093 [2024-11-20 06:16:06.510138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.510834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.510863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:47.093 
[2024-11-20 06:16:06.510880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:17:47.093 [2024-11-20 06:16:06.510895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.580142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.580199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:47.093 [2024-11-20 06:16:06.580220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.168 ms 00:17:47.093 [2024-11-20 06:16:06.580229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.605353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.605411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:47.093 [2024-11-20 06:16:06.605426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.984 ms 00:17:47.093 [2024-11-20 06:16:06.605434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.630829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.630879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:47.093 [2024-11-20 06:16:06.630892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.327 ms 00:17:47.093 [2024-11-20 06:16:06.630901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.655864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.655922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:47.093 [2024-11-20 06:16:06.655938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.896 ms 00:17:47.093 [2024-11-20 06:16:06.655947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.656008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.656019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:47.093 [2024-11-20 06:16:06.656034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:47.093 [2024-11-20 06:16:06.656042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.656136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.093 [2024-11-20 06:16:06.656147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:47.093 [2024-11-20 06:16:06.656160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:47.093 [2024-11-20 06:16:06.656168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.093 [2024-11-20 06:16:06.657160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2597.424 ms, result 0 00:17:47.093 { 00:17:47.093 "name": "ftl0", 00:17:47.093 "uuid": "65e3eda6-79b4-4218-8e10-01f7dd4585d1" 00:17:47.093 } 00:17:47.093 06:16:06 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:47.093 06:16:06 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:47.351 06:16:06 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:47.351 06:16:06 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:47.610 [2024-11-20 06:16:07.076670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.076726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:47.610 [2024-11-20 06:16:07.076739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:47.610 [2024-11-20 06:16:07.076755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.076779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:47.610 [2024-11-20 06:16:07.079395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.079435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:47.610 [2024-11-20 06:16:07.079449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:17:47.610 [2024-11-20 06:16:07.079458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.079766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.079792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:47.610 [2024-11-20 06:16:07.079803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:47.610 [2024-11-20 06:16:07.079811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.083070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.083093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:47.610 [2024-11-20 06:16:07.083105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:17:47.610 [2024-11-20 06:16:07.083114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.089376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.089419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:47.610 [2024-11-20 06:16:07.089434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms 00:17:47.610 [2024-11-20 06:16:07.089443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.114312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.114368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:47.610 [2024-11-20 06:16:07.114383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.751 ms 00:17:47.610 [2024-11-20 06:16:07.114391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.129626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.129684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:47.610 [2024-11-20 06:16:07.129700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.167 ms 00:17:47.610 [2024-11-20 06:16:07.129710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.129894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.129935] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:47.610 [2024-11-20 06:16:07.129947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:47.610 [2024-11-20 06:16:07.129954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.154087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.154141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:47.610 [2024-11-20 06:16:07.154154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.104 ms 00:17:47.610 [2024-11-20 06:16:07.154162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.177822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.177870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:47.610 [2024-11-20 06:16:07.177884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.600 ms 00:17:47.610 [2024-11-20 06:16:07.177893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-11-20 06:16:07.201409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-11-20 06:16:07.201460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:47.610 [2024-11-20 06:16:07.201474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.447 ms 00:17:47.611 [2024-11-20 06:16:07.201482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.611 [2024-11-20 06:16:07.235359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.611 [2024-11-20 06:16:07.235439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:47.611 [2024-11-20 06:16:07.235461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.730 ms 00:17:47.611 [2024-11-20 06:16:07.235473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.611 [2024-11-20 06:16:07.235571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:47.611 [2024-11-20 06:16:07.235595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235725] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.235998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 
[2024-11-20 06:16:07.236064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:47.611 [2024-11-20 06:16:07.236415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:47.611 [2024-11-20 06:16:07.236604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:47.612 [2024-11-20 06:16:07.236998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:47.612 [2024-11-20 06:16:07.237015] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 65e3eda6-79b4-4218-8e10-01f7dd4585d1 00:17:47.612 [2024-11-20 06:16:07.237028] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:47.612 [2024-11-20 06:16:07.237044] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:47.612 [2024-11-20 06:16:07.237055] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:47.612 [2024-11-20 06:16:07.237072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:47.612 [2024-11-20 06:16:07.237083] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:47.612 [2024-11-20 06:16:07.237097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:47.612 [2024-11-20 06:16:07.237109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:47.612 [2024-11-20 06:16:07.237121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:47.612 [2024-11-20 06:16:07.237132] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:47.612 [2024-11-20 06:16:07.237146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.612 [2024-11-20 06:16:07.237157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:47.612 [2024-11-20 06:16:07.237173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:17:47.612 [2024-11-20 06:16:07.237186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.256614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.870 [2024-11-20 06:16:07.256688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:47.870 [2024-11-20 06:16:07.256711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.320 ms 00:17:47.870 [2024-11-20 06:16:07.256725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.257259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.870 [2024-11-20 06:16:07.257295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:47.870 [2024-11-20 06:16:07.257318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:17:47.870 [2024-11-20 06:16:07.257331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.315412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.870 [2024-11-20 06:16:07.315468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.870 [2024-11-20 06:16:07.315484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.870 [2024-11-20 06:16:07.315502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.315573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.870 [2024-11-20 06:16:07.315583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.870 [2024-11-20 06:16:07.315594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.870 [2024-11-20 06:16:07.315602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.315709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.870 [2024-11-20 06:16:07.315719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.870 [2024-11-20 06:16:07.315729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.870 [2024-11-20 06:16:07.315737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.315758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.870 [2024-11-20 06:16:07.315766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.870 [2024-11-20 06:16:07.315775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.870 [2024-11-20 06:16:07.315782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.402559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.870 [2024-11-20 06:16:07.402638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.870 [2024-11-20 06:16:07.402661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
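[Editor's note] The "Dump statistics" block above reports "WAF: inf". The write amplification factor is presumably media writes divided by user writes, and this shutdown happened before any user I/O (total writes: 960, user writes: 0), so the ratio has no finite value. A minimal sketch of the same guard, using the values from the dump (the formula is the usual WAF definition, not a quote of the SPDK source):

  awk -v total=960 -v user=0 'BEGIN { print (user > 0 ? total / user : "inf") }'   # -> inf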
00:17:47.870 [2024-11-20 06:16:07.402675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.477456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.870 [2024-11-20 06:16:07.477529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.870 [2024-11-20 06:16:07.477546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.870 [2024-11-20 06:16:07.477557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.477641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.870 [2024-11-20 06:16:07.477653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:47.870 [2024-11-20 06:16:07.477662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.870 [2024-11-20 06:16:07.477669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.870 [2024-11-20 06:16:07.477731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.871 [2024-11-20 06:16:07.477741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:47.871 [2024-11-20 06:16:07.477751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.871 [2024-11-20 06:16:07.477759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.871 [2024-11-20 06:16:07.477851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.871 [2024-11-20 06:16:07.477861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:47.871 [2024-11-20 06:16:07.477872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.871 [2024-11-20 06:16:07.477879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.871 [2024-11-20 06:16:07.477918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.871 [2024-11-20 06:16:07.477934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:47.871 [2024-11-20 06:16:07.477943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.871 [2024-11-20 06:16:07.477951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.871 [2024-11-20 06:16:07.477990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.871 [2024-11-20 06:16:07.477999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:47.871 [2024-11-20 06:16:07.478007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.871 [2024-11-20 06:16:07.478015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.871 [2024-11-20 06:16:07.478060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.871 [2024-11-20 06:16:07.478069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:47.871 [2024-11-20 06:16:07.478079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.871 [2024-11-20 06:16:07.478086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.871 [2024-11-20 06:16:07.478207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.509 ms, result 0 00:17:47.871 true 00:17:47.871 06:16:07 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74417 
00:17:47.871 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74417 ']'
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74417
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74417
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0
killing process with pid 74417
06:16:07 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74417'
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74417
00:17:48.129 06:16:07 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74417
00:17:56.358 06:16:15 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:18:00.562 262144+0 records in
00:18:00.562 262144+0 records out
00:18:00.562 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.12367 s, 260 MB/s
00:18:00.562 06:16:20 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:18:03.089 06:16:22 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:03.089 [2024-11-20 06:16:22.236437] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization...
00:18:03.089 [2024-11-20 06:16:22.236556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74640 ]
00:18:03.089 [2024-11-20 06:16:22.392896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:03.089 [2024-11-20 06:16:22.492116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:03.347 [2024-11-20 06:16:22.743168] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:03.347 [2024-11-20 06:16:22.743235] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:03.347 [2024-11-20 06:16:22.896420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.347 [2024-11-20 06:16:22.896478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:03.347 [2024-11-20 06:16:22.896507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:03.347 [2024-11-20 06:16:22.896515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.347 [2024-11-20 06:16:22.896567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.347 [2024-11-20 06:16:22.896577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:03.347 [2024-11-20 06:16:22.896588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:18:03.347 [2024-11-20 06:16:22.896595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.347 [2024-11-20 06:16:22.896614] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0]
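[Editor's note] Quick sanity check on the dd summary above: 256K blocks of 4 KiB are 262144 * 4096 = 1073741824 bytes, and 1073741824 B / 4.12367 s is about 260 MB/s, so the reported rate is self-consistent. A one-liner to reproduce the arithmetic (not part of the captured run):

  awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 4.12367 / 1000000 }'   # -> 260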
Using nvc0n1p0 as write buffer cache 00:18:03.347 [2024-11-20 06:16:22.897267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:03.347 [2024-11-20 06:16:22.897293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.897301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.347 [2024-11-20 06:16:22.897310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:18:03.347 [2024-11-20 06:16:22.897317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.898478] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:03.347 [2024-11-20 06:16:22.910575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.910611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:03.347 [2024-11-20 06:16:22.910624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.099 ms 00:18:03.347 [2024-11-20 06:16:22.910633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.910692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.910702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:03.347 [2024-11-20 06:16:22.910710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:03.347 [2024-11-20 06:16:22.910718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.915525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.915569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.347 [2024-11-20 06:16:22.915583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:18:03.347 [2024-11-20 06:16:22.915594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.915669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.915679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.347 [2024-11-20 06:16:22.915695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:03.347 [2024-11-20 06:16:22.915702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.915743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.915751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:03.347 [2024-11-20 06:16:22.915759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:03.347 [2024-11-20 06:16:22.915766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.915791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:03.347 [2024-11-20 06:16:22.919023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.919050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.347 [2024-11-20 06:16:22.919060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:18:03.347 [2024-11-20 06:16:22.919069] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.919097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.347 [2024-11-20 06:16:22.919105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:03.347 [2024-11-20 06:16:22.919113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:03.347 [2024-11-20 06:16:22.919120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.347 [2024-11-20 06:16:22.919139] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:03.347 [2024-11-20 06:16:22.919155] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:03.347 [2024-11-20 06:16:22.919190] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:03.348 [2024-11-20 06:16:22.919208] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:03.348 [2024-11-20 06:16:22.919309] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:03.348 [2024-11-20 06:16:22.919324] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:03.348 [2024-11-20 06:16:22.919335] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:03.348 [2024-11-20 06:16:22.919345] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919354] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919362] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:03.348 [2024-11-20 06:16:22.919369] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:03.348 [2024-11-20 06:16:22.919376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:03.348 [2024-11-20 06:16:22.919384] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:03.348 [2024-11-20 06:16:22.919392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.348 [2024-11-20 06:16:22.919400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:03.348 [2024-11-20 06:16:22.919408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:18:03.348 [2024-11-20 06:16:22.919414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.348 [2024-11-20 06:16:22.919505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.348 [2024-11-20 06:16:22.919514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:03.348 [2024-11-20 06:16:22.919522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:03.348 [2024-11-20 06:16:22.919528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.348 [2024-11-20 06:16:22.919630] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:03.348 [2024-11-20 06:16:22.919646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:03.348 [2024-11-20 06:16:22.919654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:03.348 [2024-11-20 06:16:22.919662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:03.348 [2024-11-20 06:16:22.919676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:03.348 [2024-11-20 06:16:22.919697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.348 [2024-11-20 06:16:22.919711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:03.348 [2024-11-20 06:16:22.919718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:03.348 [2024-11-20 06:16:22.919724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.348 [2024-11-20 06:16:22.919732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:03.348 [2024-11-20 06:16:22.919739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:03.348 [2024-11-20 06:16:22.919751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:03.348 [2024-11-20 06:16:22.919765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:03.348 [2024-11-20 06:16:22.919785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:03.348 [2024-11-20 06:16:22.919804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:03.348 [2024-11-20 06:16:22.919824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:03.348 [2024-11-20 06:16:22.919843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:03.348 [2024-11-20 06:16:22.919863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.348 [2024-11-20 06:16:22.919876] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:03.348 [2024-11-20 06:16:22.919882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:03.348 [2024-11-20 06:16:22.919889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.348 [2024-11-20 06:16:22.919895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:03.348 [2024-11-20 06:16:22.919902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:03.348 [2024-11-20 06:16:22.919908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:03.348 [2024-11-20 06:16:22.919921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:03.348 [2024-11-20 06:16:22.919928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919934] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:03.348 [2024-11-20 06:16:22.919941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:03.348 [2024-11-20 06:16:22.919948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.348 [2024-11-20 06:16:22.919956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.348 [2024-11-20 06:16:22.919963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:03.348 [2024-11-20 06:16:22.919970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:03.348 [2024-11-20 06:16:22.919977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:03.348 [2024-11-20 06:16:22.919985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:03.348 [2024-11-20 06:16:22.919991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:03.348 [2024-11-20 06:16:22.919998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:03.348 [2024-11-20 06:16:22.920005] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:03.348 [2024-11-20 06:16:22.920014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.348 [2024-11-20 06:16:22.920023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:03.348 [2024-11-20 06:16:22.920031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:03.348 [2024-11-20 06:16:22.920038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:03.348 [2024-11-20 06:16:22.920044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:03.348 [2024-11-20 06:16:22.920051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:03.348 [2024-11-20 06:16:22.920058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:03.348 [2024-11-20 06:16:22.920065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
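[Editor's note] In the "SB metadata layout" records above and below, blk_offs and blk_sz count FTL blocks. Assuming the default 4 KiB FTL block size (inferred from the matching numbers, not quoted from the configuration), they convert exactly to the MiB figures in the region dump, e.g. blk_sz:0x5000 is 80 MiB, which lines up with the l2p region's "blocks: 80.00 MiB". Two spot checks in shell arithmetic:

  echo $(( 0x5000 * 4096 / 1048576 ))      # l2p region      -> 80 (MiB)
  echo $(( 0x1900000 * 4096 / 1048576 ))   # data_btm region -> 102400 (MiB)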
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:03.348 [2024-11-20 06:16:22.920072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:03.348 [2024-11-20 06:16:22.920079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:03.348 [2024-11-20 06:16:22.920086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:03.348 [2024-11-20 06:16:22.920093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:03.348 [2024-11-20 06:16:22.920100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:03.348 [2024-11-20 06:16:22.920107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:03.348 [2024-11-20 06:16:22.920115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:03.348 [2024-11-20 06:16:22.920122] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:03.348 [2024-11-20 06:16:22.920132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.348 [2024-11-20 06:16:22.920140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:03.348 [2024-11-20 06:16:22.920147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:03.348 [2024-11-20 06:16:22.920155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:03.348 [2024-11-20 06:16:22.920162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:03.348 [2024-11-20 06:16:22.920169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.348 [2024-11-20 06:16:22.920176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:03.348 [2024-11-20 06:16:22.920183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:18:03.349 [2024-11-20 06:16:22.920190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.349 [2024-11-20 06:16:22.945766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.349 [2024-11-20 06:16:22.945817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.349 [2024-11-20 06:16:22.945829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.520 ms 00:18:03.349 [2024-11-20 06:16:22.945837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.349 [2024-11-20 06:16:22.945932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.349 [2024-11-20 06:16:22.945941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:03.349 [2024-11-20 06:16:22.945949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.064 ms 00:18:03.349 [2024-11-20 06:16:22.945957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.606 [2024-11-20 06:16:22.993471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.606 [2024-11-20 06:16:22.993534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.606 [2024-11-20 06:16:22.993548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.451 ms 00:18:03.606 [2024-11-20 06:16:22.993556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.606 [2024-11-20 06:16:22.993612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.606 [2024-11-20 06:16:22.993622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.606 [2024-11-20 06:16:22.993633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:03.606 [2024-11-20 06:16:22.993640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.606 [2024-11-20 06:16:22.994007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.606 [2024-11-20 06:16:22.994033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.606 [2024-11-20 06:16:22.994043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:18:03.606 [2024-11-20 06:16:22.994051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.606 [2024-11-20 06:16:22.994177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.606 [2024-11-20 06:16:22.994192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.606 [2024-11-20 06:16:22.994200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:03.606 [2024-11-20 06:16:22.994211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.606 [2024-11-20 06:16:23.007222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.606 [2024-11-20 06:16:23.007260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.606 [2024-11-20 06:16:23.007272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.991 ms 00:18:03.606 [2024-11-20 06:16:23.007280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.606 [2024-11-20 06:16:23.019475] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:03.606 [2024-11-20 06:16:23.019524] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:03.606 [2024-11-20 06:16:23.019537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.606 [2024-11-20 06:16:23.019546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:03.606 [2024-11-20 06:16:23.019557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.149 ms 00:18:03.606 [2024-11-20 06:16:23.019564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.606 [2024-11-20 06:16:23.043934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.606 [2024-11-20 06:16:23.043992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:03.606 [2024-11-20 06:16:23.044004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.265 ms 00:18:03.606 [2024-11-20 06:16:23.044012] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.056217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.056270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:03.607 [2024-11-20 06:16:23.056282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.140 ms 00:18:03.607 [2024-11-20 06:16:23.056290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.067970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.068011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:03.607 [2024-11-20 06:16:23.068022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.636 ms 00:18:03.607 [2024-11-20 06:16:23.068029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.068678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.068703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:03.607 [2024-11-20 06:16:23.068712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:18:03.607 [2024-11-20 06:16:23.068720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.124015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.124074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:03.607 [2024-11-20 06:16:23.124087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.272 ms 00:18:03.607 [2024-11-20 06:16:23.124101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.134824] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:03.607 [2024-11-20 06:16:23.137410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.137441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:03.607 [2024-11-20 06:16:23.137454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.260 ms 00:18:03.607 [2024-11-20 06:16:23.137463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.137580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.137604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:03.607 [2024-11-20 06:16:23.137612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:03.607 [2024-11-20 06:16:23.137620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.137696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.137712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:03.607 [2024-11-20 06:16:23.137720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:03.607 [2024-11-20 06:16:23.137728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.607 [2024-11-20 06:16:23.137747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.607 [2024-11-20 06:16:23.137755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller
00:18:03.607 [2024-11-20 06:16:23.137763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:03.607 [2024-11-20 06:16:23.137770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.607 [2024-11-20 06:16:23.137799] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:03.607 [2024-11-20 06:16:23.137808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.607 [2024-11-20 06:16:23.137818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:03.607 [2024-11-20 06:16:23.137826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:18:03.607 [2024-11-20 06:16:23.137832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.607 [2024-11-20 06:16:23.160645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.607 [2024-11-20 06:16:23.160696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:03.607 [2024-11-20 06:16:23.160709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.796 ms
00:18:03.607 [2024-11-20 06:16:23.160716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.607 [2024-11-20 06:16:23.160795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.607 [2024-11-20 06:16:23.160804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:03.607 [2024-11-20 06:16:23.160812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:18:03.607 [2024-11-20 06:16:23.160820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.607 [2024-11-20 06:16:23.161715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.880 ms, result 0
00:18:04.547 [2024-11-20T06:16:25.551Z] Copying: 46/1024 [MB] (46 MBps)
[2024-11-20T06:16:26.521Z] Copying: 92/1024 [MB] (46 MBps)
[2024-11-20T06:16:27.453Z] Copying: 138/1024 [MB] (45 MBps)
[2024-11-20T06:16:28.386Z] Copying: 183/1024 [MB] (45 MBps)
[2024-11-20T06:16:29.318Z] Copying: 229/1024 [MB] (46 MBps)
[2024-11-20T06:16:30.250Z] Copying: 282/1024 [MB] (52 MBps)
[2024-11-20T06:16:31.183Z] Copying: 327/1024 [MB] (45 MBps)
[2024-11-20T06:16:32.553Z] Copying: 374/1024 [MB] (46 MBps)
[2024-11-20T06:16:33.486Z] Copying: 420/1024 [MB] (46 MBps)
[2024-11-20T06:16:34.420Z] Copying: 467/1024 [MB] (46 MBps)
[2024-11-20T06:16:35.352Z] Copying: 513/1024 [MB] (45 MBps)
[2024-11-20T06:16:36.286Z] Copying: 556/1024 [MB] (42 MBps)
[2024-11-20T06:16:37.251Z] Copying: 600/1024 [MB] (43 MBps)
[2024-11-20T06:16:38.184Z] Copying: 645/1024 [MB] (45 MBps)
[2024-11-20T06:16:39.556Z] Copying: 690/1024 [MB] (44 MBps)
[2024-11-20T06:16:40.492Z] Copying: 733/1024 [MB] (43 MBps)
[2024-11-20T06:16:41.428Z] Copying: 778/1024 [MB] (44 MBps)
[2024-11-20T06:16:42.362Z] Copying: 828/1024 [MB] (49 MBps)
[2024-11-20T06:16:43.294Z] Copying: 873/1024 [MB] (45 MBps)
[2024-11-20T06:16:44.227Z] Copying: 914/1024 [MB] (41 MBps)
[2024-11-20T06:16:45.616Z] Copying: 951/1024 [MB] (36 MBps)
[2024-11-20T06:16:46.183Z] Copying: 995/1024 [MB] (44 MBps)
[2024-11-20T06:16:46.183Z] Copying: 1024/1024 [MB] (average 45 MBps)
[2024-11-20 06:16:45.877024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:26.550 [2024-11-20 06:16:45.877141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
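[Editor's note] Cross-check on the "average 45 MBps" figure above: the copy ran from roughly 06:16:23.2 ('FTL startup' finished) to 06:16:46.2 (last progress line), i.e. about 23 s for 1024 MB, which rounds to the reported average:

  awk 'BEGIN { printf "%.0f MBps\n", 1024 / 23 }'   # -> 45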
00:18:26.550 [2024-11-20 06:16:45.877157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:26.550 [2024-11-20 06:16:45.877165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.877185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:26.550 [2024-11-20 06:16:45.879371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.879402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:26.550 [2024-11-20 06:16:45.879412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:18:26.550 [2024-11-20 06:16:45.879424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.880934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.880960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:26.550 [2024-11-20 06:16:45.880969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:18:26.550 [2024-11-20 06:16:45.880975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.892711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.892766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:26.550 [2024-11-20 06:16:45.892778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.718 ms 00:18:26.550 [2024-11-20 06:16:45.892784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.898085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.898163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:26.550 [2024-11-20 06:16:45.898181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.255 ms 00:18:26.550 [2024-11-20 06:16:45.898194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.924278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.924332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:26.550 [2024-11-20 06:16:45.924344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.988 ms 00:18:26.550 [2024-11-20 06:16:45.924351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.936855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.936916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:26.550 [2024-11-20 06:16:45.936929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.454 ms 00:18:26.550 [2024-11-20 06:16:45.936938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.937055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.937063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:26.550 [2024-11-20 06:16:45.937078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:26.550 [2024-11-20 06:16:45.937084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 
06:16:45.958339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.958397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:26.550 [2024-11-20 06:16:45.958409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.241 ms 00:18:26.550 [2024-11-20 06:16:45.958415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.978928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.978979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:26.550 [2024-11-20 06:16:45.979002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.452 ms 00:18:26.550 [2024-11-20 06:16:45.979008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:45.998829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:45.998875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:26.550 [2024-11-20 06:16:45.998887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.767 ms 00:18:26.550 [2024-11-20 06:16:45.998894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:46.018338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.550 [2024-11-20 06:16:46.018381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:26.550 [2024-11-20 06:16:46.018392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.366 ms 00:18:26.550 [2024-11-20 06:16:46.018398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.550 [2024-11-20 06:16:46.018443] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:26.550 [2024-11-20 06:16:46.018457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018539] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:26.550 [2024-11-20 06:16:46.018551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018693] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 
06:16:46.018865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.018994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:18:26.551 [2024-11-20 06:16:46.019019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:26.551 [2024-11-20 06:16:46.019114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:26.551 [2024-11-20 06:16:46.019132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 65e3eda6-79b4-4218-8e10-01f7dd4585d1 00:18:26.551 [2024-11-20 06:16:46.019142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:26.552 [2024-11-20 06:16:46.019148] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:26.552 [2024-11-20 06:16:46.019154] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:26.552 [2024-11-20 06:16:46.019161] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:26.552 [2024-11-20 06:16:46.019166] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:26.552 [2024-11-20 06:16:46.019173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:26.552 [2024-11-20 06:16:46.019179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:26.552 [2024-11-20 06:16:46.019190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:26.552 [2024-11-20 06:16:46.019195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:26.552 [2024-11-20 06:16:46.019201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.552 [2024-11-20 06:16:46.019207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:26.552 [2024-11-20 06:16:46.019215] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:18:26.552 [2024-11-20 06:16:46.019220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.029399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.552 [2024-11-20 06:16:46.029438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:26.552 [2024-11-20 06:16:46.029449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.142 ms 00:18:26.552 [2024-11-20 06:16:46.029456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.029764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.552 [2024-11-20 06:16:46.029775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:26.552 [2024-11-20 06:16:46.029791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:18:26.552 [2024-11-20 06:16:46.029797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.056403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.056453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.552 [2024-11-20 06:16:46.056462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.056468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.056536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.056544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.552 [2024-11-20 06:16:46.056550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.056557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.056616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.056624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.552 [2024-11-20 06:16:46.056631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.056638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.056650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.056656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.552 [2024-11-20 06:16:46.056663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.056669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.120370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.120414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.552 [2024-11-20 06:16:46.120425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.120431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.173405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:18:26.552 [2024-11-20 06:16:46.173415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.173422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.173488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.552 [2024-11-20 06:16:46.173511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.173521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.173581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.552 [2024-11-20 06:16:46.173587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.173594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.173677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.552 [2024-11-20 06:16:46.173684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.173690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.173727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:26.552 [2024-11-20 06:16:46.173734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.173740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.173779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.552 [2024-11-20 06:16:46.173785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.173792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.552 [2024-11-20 06:16:46.173833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.552 [2024-11-20 06:16:46.173839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.552 [2024-11-20 06:16:46.173846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.552 [2024-11-20 06:16:46.173942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 296.892 ms, result 0 00:18:28.452 00:18:28.452 00:18:28.452 06:16:47 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:28.452 [2024-11-20 06:16:47.855652] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
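The spdk_dd invocation above (restore.sh line 74, per the test's own log prefix) is the read-back step: it copies 262144 blocks out of the freshly restored ftl0 bdev into a plain file, presumably for a later data comparison. A minimal standalone sketch of that step, using only the paths and flags visible in the logged command:

  # Read 262144 blocks from the restored FTL bdev into a regular file.
  # Every path and flag below is copied from the command as logged; the
  # SPDK variable is shorthand introduced for this sketch only.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" \
      --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --count=262144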
00:18:28.452 [2024-11-20 06:16:47.855772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74903 ] 00:18:28.452 [2024-11-20 06:16:48.014336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.711 [2024-11-20 06:16:48.115477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.969 [2024-11-20 06:16:48.370564] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:28.969 [2024-11-20 06:16:48.370633] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:28.969 [2024-11-20 06:16:48.524033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.969 [2024-11-20 06:16:48.524090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:28.969 [2024-11-20 06:16:48.524108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:28.969 [2024-11-20 06:16:48.524116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.969 [2024-11-20 06:16:48.524171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.969 [2024-11-20 06:16:48.524181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.969 [2024-11-20 06:16:48.524192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:28.969 [2024-11-20 06:16:48.524199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.969 [2024-11-20 06:16:48.524218] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:28.969 [2024-11-20 06:16:48.524915] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:28.969 [2024-11-20 06:16:48.524938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.969 [2024-11-20 06:16:48.524946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.969 [2024-11-20 06:16:48.524954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:18:28.969 [2024-11-20 06:16:48.524962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.969 [2024-11-20 06:16:48.526077] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:28.970 [2024-11-20 06:16:48.538398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.538447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:28.970 [2024-11-20 06:16:48.538459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.322 ms 00:18:28.970 [2024-11-20 06:16:48.538467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.538552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.538562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:28.970 [2024-11-20 06:16:48.538571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:28.970 [2024-11-20 06:16:48.538578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.543953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
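The two "Currently unable to find bdev with name: nvc0n1" notices just above appear to be benign in this run: they are logged while spdk_dd is still replaying ftl.json, before the cache device has come up, and the subsequent "Using nvc0n1p0 as write buffer cache" shows the open ultimately succeeds. For orientation, an FTL entry in such a JSON config might look like the sketch below; bdev_ftl_create is the real SPDK RPC name, but the base-bdev value and overall shape here are illustrative assumptions, not values recovered from this run.

  # Hypothetical ftl.json fragment; "nvme0n1" is a placeholder base
  # bdev, not a device from this test run.
  cat > ftl.json <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_ftl_create",
        "params": { "name": "ftl0", "base_bdev": "nvme0n1", "cache": "nvc0n1p0" }
      }]
    }]
  }
  EOF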
00:18:28.970 [2024-11-20 06:16:48.544003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.970 [2024-11-20 06:16:48.544019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.297 ms 00:18:28.970 [2024-11-20 06:16:48.544031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.544133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.544146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.970 [2024-11-20 06:16:48.544154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:28.970 [2024-11-20 06:16:48.544161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.544212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.544221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:28.970 [2024-11-20 06:16:48.544230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:28.970 [2024-11-20 06:16:48.544236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.544261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.970 [2024-11-20 06:16:48.547609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.547636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.970 [2024-11-20 06:16:48.547646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.356 ms 00:18:28.970 [2024-11-20 06:16:48.547655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.547686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.547694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:28.970 [2024-11-20 06:16:48.547702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:28.970 [2024-11-20 06:16:48.547709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.547730] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:28.970 [2024-11-20 06:16:48.547747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:28.970 [2024-11-20 06:16:48.547781] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:28.970 [2024-11-20 06:16:48.547798] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:28.970 [2024-11-20 06:16:48.547899] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:28.970 [2024-11-20 06:16:48.547914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:28.970 [2024-11-20 06:16:48.547925] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:28.970 [2024-11-20 06:16:48.547935] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:28.970 [2024-11-20 06:16:48.547943] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:28.970 [2024-11-20 06:16:48.547952] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:28.970 [2024-11-20 06:16:48.547959] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:28.970 [2024-11-20 06:16:48.547966] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:28.970 [2024-11-20 06:16:48.547975] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:28.970 [2024-11-20 06:16:48.547982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.547989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:28.970 [2024-11-20 06:16:48.547996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:28.970 [2024-11-20 06:16:48.548003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.548084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.970 [2024-11-20 06:16:48.548092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:28.970 [2024-11-20 06:16:48.548099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:28.970 [2024-11-20 06:16:48.548105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.970 [2024-11-20 06:16:48.548225] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:28.970 [2024-11-20 06:16:48.548240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:28.970 [2024-11-20 06:16:48.548248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:28.970 [2024-11-20 06:16:48.548272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:28.970 [2024-11-20 06:16:48.548293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.970 [2024-11-20 06:16:48.548306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:28.970 [2024-11-20 06:16:48.548312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:28.970 [2024-11-20 06:16:48.548318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.970 [2024-11-20 06:16:48.548325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:28.970 [2024-11-20 06:16:48.548331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:28.970 [2024-11-20 06:16:48.548344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:28.970 [2024-11-20 06:16:48.548356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548363] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:28.970 [2024-11-20 06:16:48.548377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:28.970 [2024-11-20 06:16:48.548396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:28.970 [2024-11-20 06:16:48.548416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:28.970 [2024-11-20 06:16:48.548434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:28.970 [2024-11-20 06:16:48.548452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.970 [2024-11-20 06:16:48.548465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:28.970 [2024-11-20 06:16:48.548471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:28.970 [2024-11-20 06:16:48.548477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.970 [2024-11-20 06:16:48.548485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:28.970 [2024-11-20 06:16:48.548504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:28.970 [2024-11-20 06:16:48.548511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:28.970 [2024-11-20 06:16:48.548524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:28.970 [2024-11-20 06:16:48.548531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548538] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:28.970 [2024-11-20 06:16:48.548546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:28.970 [2024-11-20 06:16:48.548553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.970 [2024-11-20 06:16:48.548560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.970 [2024-11-20 06:16:48.548567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:28.970 [2024-11-20 06:16:48.548574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:28.970 [2024-11-20 06:16:48.548580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:28.970 
[2024-11-20 06:16:48.548587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:28.970 [2024-11-20 06:16:48.548593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:28.970 [2024-11-20 06:16:48.548600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:28.971 [2024-11-20 06:16:48.548608] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:28.971 [2024-11-20 06:16:48.548617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.971 [2024-11-20 06:16:48.548625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:28.971 [2024-11-20 06:16:48.548633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:28.971 [2024-11-20 06:16:48.548640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:28.971 [2024-11-20 06:16:48.548647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:28.971 [2024-11-20 06:16:48.548653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:28.971 [2024-11-20 06:16:48.548660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:28.971 [2024-11-20 06:16:48.548667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:28.971 [2024-11-20 06:16:48.548673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:28.971 [2024-11-20 06:16:48.548681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:28.971 [2024-11-20 06:16:48.548687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:28.971 [2024-11-20 06:16:48.548694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:28.971 [2024-11-20 06:16:48.548701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:28.971 [2024-11-20 06:16:48.548708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:28.971 [2024-11-20 06:16:48.548715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:28.971 [2024-11-20 06:16:48.548724] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:28.971 [2024-11-20 06:16:48.548735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.971 [2024-11-20 06:16:48.548743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:28.971 [2024-11-20 06:16:48.548750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:28.971 [2024-11-20 06:16:48.548757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:28.971 [2024-11-20 06:16:48.548764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:28.971 [2024-11-20 06:16:48.548771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.971 [2024-11-20 06:16:48.548779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:28.971 [2024-11-20 06:16:48.548786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:18:28.971 [2024-11-20 06:16:48.548793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.971 [2024-11-20 06:16:48.574939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.971 [2024-11-20 06:16:48.574982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:28.971 [2024-11-20 06:16:48.574994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.092 ms 00:18:28.971 [2024-11-20 06:16:48.575002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.971 [2024-11-20 06:16:48.575099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.971 [2024-11-20 06:16:48.575107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:28.971 [2024-11-20 06:16:48.575115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:28.971 [2024-11-20 06:16:48.575122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.617721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.617779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.230 [2024-11-20 06:16:48.617793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.532 ms 00:18:29.230 [2024-11-20 06:16:48.617801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.617862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.617871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.230 [2024-11-20 06:16:48.617885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:29.230 [2024-11-20 06:16:48.617892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.618325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.618349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.230 [2024-11-20 06:16:48.618359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:18:29.230 [2024-11-20 06:16:48.618366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.618514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.618525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.230 [2024-11-20 06:16:48.618533] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:29.230 [2024-11-20 06:16:48.618545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.632189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.632233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.230 [2024-11-20 06:16:48.632247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.624 ms 00:18:29.230 [2024-11-20 06:16:48.632255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.644769] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:29.230 [2024-11-20 06:16:48.644823] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:29.230 [2024-11-20 06:16:48.644837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.644847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:29.230 [2024-11-20 06:16:48.644857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.464 ms 00:18:29.230 [2024-11-20 06:16:48.644864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.669687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.669742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:29.230 [2024-11-20 06:16:48.669754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.761 ms 00:18:29.230 [2024-11-20 06:16:48.669762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.681895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.681945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:29.230 [2024-11-20 06:16:48.681957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.063 ms 00:18:29.230 [2024-11-20 06:16:48.681964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.694423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.694476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:29.230 [2024-11-20 06:16:48.694489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.406 ms 00:18:29.230 [2024-11-20 06:16:48.694505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.695210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.695234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:29.230 [2024-11-20 06:16:48.695244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:18:29.230 [2024-11-20 06:16:48.695255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.752451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.752524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:29.230 [2024-11-20 06:16:48.752545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.177 ms 00:18:29.230 [2024-11-20 06:16:48.752553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.763823] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:29.230 [2024-11-20 06:16:48.766700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.766740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.230 [2024-11-20 06:16:48.766754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.081 ms 00:18:29.230 [2024-11-20 06:16:48.766763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.766895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.766906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:29.230 [2024-11-20 06:16:48.766915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:29.230 [2024-11-20 06:16:48.766926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.766990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.766999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:29.230 [2024-11-20 06:16:48.767008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:29.230 [2024-11-20 06:16:48.767015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.767033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.767041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:29.230 [2024-11-20 06:16:48.767048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.230 [2024-11-20 06:16:48.767055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.767087] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:29.230 [2024-11-20 06:16:48.767097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.767105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:29.230 [2024-11-20 06:16:48.767112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:29.230 [2024-11-20 06:16:48.767120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.791401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.791452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:29.230 [2024-11-20 06:16:48.791465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.262 ms 00:18:29.230 [2024-11-20 06:16:48.791481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.230 [2024-11-20 06:16:48.791580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.230 [2024-11-20 06:16:48.791591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:29.230 [2024-11-20 06:16:48.791599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:29.230 [2024-11-20 06:16:48.791607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
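Every management step in the startup trace above logs a "name:" record followed by a "duration:" record, so the slow stages (Restore P2L checkpoints at 57.177 ms, Initialize NV cache at 42.532 ms) can be ranked mechanically. A minimal sketch, assuming the console output has been saved one record per line to build.log (that filename is an assumption, not something this job produces):

  # Pair each trace_step "name:" record with the "duration:" record
  # that follows it, then print the slowest FTL management steps first.
  awk -F'name: |duration: | ms' '
      /trace_step/ && /name: /     { step = $2 }
      /trace_step/ && /duration: / { print $2, step }
  ' build.log | sort -rn | head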
00:18:29.230 [2024-11-20 06:16:48.792640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.138 ms, result 0 00:18:30.602  [2024-11-20T06:16:51.167Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-20T06:16:52.102Z] Copying: 93/1024 [MB] (47 MBps) [2024-11-20T06:16:53.036Z] Copying: 140/1024 [MB] (46 MBps) [2024-11-20T06:16:53.970Z] Copying: 186/1024 [MB] (46 MBps) [2024-11-20T06:16:55.349Z] Copying: 231/1024 [MB] (44 MBps) [2024-11-20T06:16:56.304Z] Copying: 278/1024 [MB] (47 MBps) [2024-11-20T06:16:57.233Z] Copying: 327/1024 [MB] (48 MBps) [2024-11-20T06:16:58.173Z] Copying: 374/1024 [MB] (47 MBps) [2024-11-20T06:16:59.107Z] Copying: 425/1024 [MB] (50 MBps) [2024-11-20T06:17:00.040Z] Copying: 473/1024 [MB] (47 MBps) [2024-11-20T06:17:00.975Z] Copying: 520/1024 [MB] (46 MBps) [2024-11-20T06:17:02.347Z] Copying: 568/1024 [MB] (48 MBps) [2024-11-20T06:17:03.281Z] Copying: 614/1024 [MB] (45 MBps) [2024-11-20T06:17:04.214Z] Copying: 660/1024 [MB] (46 MBps) [2024-11-20T06:17:05.145Z] Copying: 708/1024 [MB] (47 MBps) [2024-11-20T06:17:06.084Z] Copying: 752/1024 [MB] (44 MBps) [2024-11-20T06:17:07.018Z] Copying: 799/1024 [MB] (46 MBps) [2024-11-20T06:17:08.393Z] Copying: 842/1024 [MB] (43 MBps) [2024-11-20T06:17:09.327Z] Copying: 884/1024 [MB] (42 MBps) [2024-11-20T06:17:10.263Z] Copying: 932/1024 [MB] (47 MBps) [2024-11-20T06:17:11.196Z] Copying: 972/1024 [MB] (40 MBps) [2024-11-20T06:17:11.196Z] Copying: 1016/1024 [MB] (43 MBps) [2024-11-20T06:17:12.570Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-11-20 06:17:12.520198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.937 [2024-11-20 06:17:12.520263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:52.937 [2024-11-20 06:17:12.520277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:52.937 [2024-11-20 06:17:12.520286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.937 [2024-11-20 06:17:12.520307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:52.937 [2024-11-20 06:17:12.523841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.937 [2024-11-20 06:17:12.523876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:52.937 [2024-11-20 06:17:12.523893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.519 ms 00:18:52.937 [2024-11-20 06:17:12.523902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.937 [2024-11-20 06:17:12.524211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.937 [2024-11-20 06:17:12.524227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:52.937 [2024-11-20 06:17:12.524236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:52.937 [2024-11-20 06:17:12.524243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.937 [2024-11-20 06:17:12.527688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.937 [2024-11-20 06:17:12.527709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:52.937 [2024-11-20 06:17:12.527718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.432 ms 00:18:52.937 [2024-11-20 06:17:12.527725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.937 [2024-11-20 06:17:12.533864] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.937 [2024-11-20 06:17:12.533903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:52.937 [2024-11-20 06:17:12.533915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:18:52.937 [2024-11-20 06:17:12.533923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.938 [2024-11-20 06:17:12.561704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.938 [2024-11-20 06:17:12.561772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:52.938 [2024-11-20 06:17:12.561788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.710 ms 00:18:52.938 [2024-11-20 06:17:12.561798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.197 [2024-11-20 06:17:12.579574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.197 [2024-11-20 06:17:12.579633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:53.198 [2024-11-20 06:17:12.579647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.730 ms 00:18:53.198 [2024-11-20 06:17:12.579655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.198 [2024-11-20 06:17:12.579814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.198 [2024-11-20 06:17:12.579834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:53.198 [2024-11-20 06:17:12.579843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:53.198 [2024-11-20 06:17:12.579850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.198 [2024-11-20 06:17:12.605737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.198 [2024-11-20 06:17:12.605793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:53.198 [2024-11-20 06:17:12.605806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.870 ms 00:18:53.198 [2024-11-20 06:17:12.605814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.198 [2024-11-20 06:17:12.629944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.198 [2024-11-20 06:17:12.630005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:53.198 [2024-11-20 06:17:12.630017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.095 ms 00:18:53.198 [2024-11-20 06:17:12.630024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.198 [2024-11-20 06:17:12.653055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.198 [2024-11-20 06:17:12.653105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:53.198 [2024-11-20 06:17:12.653117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.000 ms 00:18:53.198 [2024-11-20 06:17:12.653124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.198 [2024-11-20 06:17:12.676302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.198 [2024-11-20 06:17:12.676348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:53.198 [2024-11-20 06:17:12.676360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.117 ms 00:18:53.198 [2024-11-20 06:17:12.676368] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:53.198 [2024-11-20 06:17:12.676399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:53.198 [2024-11-20 06:17:12.676414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:53.198 [2024-11-20 06:17:12.676930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676959] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.676995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677140] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:53.199 [2024-11-20 06:17:12.677163] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:53.199 [2024-11-20 06:17:12.677173] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 65e3eda6-79b4-4218-8e10-01f7dd4585d1 00:18:53.199 [2024-11-20 06:17:12.677181] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:53.199 [2024-11-20 06:17:12.677188] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:53.199 [2024-11-20 06:17:12.677194] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:53.199 [2024-11-20 06:17:12.677202] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:53.199 [2024-11-20 06:17:12.677208] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:53.199 [2024-11-20 06:17:12.677216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:53.199 [2024-11-20 06:17:12.677229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:53.199 [2024-11-20 06:17:12.677235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:53.199 [2024-11-20 06:17:12.677241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:53.199 [2024-11-20 06:17:12.677249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.199 [2024-11-20 06:17:12.677256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:53.199 [2024-11-20 06:17:12.677265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:18:53.199 [2024-11-20 06:17:12.677272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.199 [2024-11-20 06:17:12.689777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.199 [2024-11-20 06:17:12.689816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:53.199 [2024-11-20 06:17:12.689828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.480 ms 00:18:53.199 [2024-11-20 06:17:12.689836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.199 [2024-11-20 06:17:12.690205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.199 [2024-11-20 06:17:12.690228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:53.199 [2024-11-20 06:17:12.690237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:18:53.199 [2024-11-20 06:17:12.690248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.199 [2024-11-20 06:17:12.722835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.199 [2024-11-20 06:17:12.722885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:53.199 [2024-11-20 06:17:12.722897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.199 [2024-11-20 06:17:12.722904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.199 [2024-11-20 06:17:12.722967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.199 [2024-11-20 06:17:12.722975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.199 
[2024-11-20 06:17:12.722983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.199 [2024-11-20 06:17:12.722995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.199 [2024-11-20 06:17:12.723076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.199 [2024-11-20 06:17:12.723086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.199 [2024-11-20 06:17:12.723094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.199 [2024-11-20 06:17:12.723101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.199 [2024-11-20 06:17:12.723115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.199 [2024-11-20 06:17:12.723123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.199 [2024-11-20 06:17:12.723130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.199 [2024-11-20 06:17:12.723137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.199 [2024-11-20 06:17:12.800463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.199 [2024-11-20 06:17:12.800550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.199 [2024-11-20 06:17:12.800563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.199 [2024-11-20 06:17:12.800570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.458 [2024-11-20 06:17:12.863189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.458 [2024-11-20 06:17:12.863200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.458 [2024-11-20 06:17:12.863214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.458 [2024-11-20 06:17:12.863272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.458 [2024-11-20 06:17:12.863280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.458 [2024-11-20 06:17:12.863287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.458 [2024-11-20 06:17:12.863342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.458 [2024-11-20 06:17:12.863351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.458 [2024-11-20 06:17:12.863358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.458 [2024-11-20 06:17:12.863451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.458 [2024-11-20 06:17:12.863459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.458 [2024-11-20 06:17:12.863466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.458 [2024-11-20 06:17:12.863523] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:53.458 [2024-11-20 06:17:12.863531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.458 [2024-11-20 06:17:12.863538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.458 [2024-11-20 06:17:12.863582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.458 [2024-11-20 06:17:12.863589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.458 [2024-11-20 06:17:12.863596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.458 [2024-11-20 06:17:12.863676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.458 [2024-11-20 06:17:12.863684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.458 [2024-11-20 06:17:12.863691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.458 [2024-11-20 06:17:12.863804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.575 ms, result 0 00:18:54.024 00:18:54.024 00:18:54.024 06:17:13 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:56.615 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:56.615 06:17:15 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:18:56.615 [2024-11-20 06:17:15.797849] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:18:56.615 [2024-11-20 06:17:15.797977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75195 ] 00:18:56.615 [2024-11-20 06:17:15.958237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.615 [2024-11-20 06:17:16.062442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.875 [2024-11-20 06:17:16.340734] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:56.875 [2024-11-20 06:17:16.340806] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:56.875 [2024-11-20 06:17:16.498967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.875 [2024-11-20 06:17:16.499027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:56.875 [2024-11-20 06:17:16.499044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:56.875 [2024-11-20 06:17:16.499052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.875 [2024-11-20 06:17:16.499109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.875 [2024-11-20 06:17:16.499119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:56.875 [2024-11-20 06:17:16.499129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:56.875 [2024-11-20 06:17:16.499136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.875 [2024-11-20 06:17:16.499155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:56.875 [2024-11-20 06:17:16.499905] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:56.875 [2024-11-20 06:17:16.499933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.875 [2024-11-20 06:17:16.499942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:56.875 [2024-11-20 06:17:16.499950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:18:56.875 [2024-11-20 06:17:16.499957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.875 [2024-11-20 06:17:16.501102] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:57.137 [2024-11-20 06:17:16.514113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.514171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:57.137 [2024-11-20 06:17:16.514183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.011 ms 00:18:57.137 [2024-11-20 06:17:16.514191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.514295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.514305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:57.137 [2024-11-20 06:17:16.514314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:57.137 [2024-11-20 06:17:16.514322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.520243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:57.137 [2024-11-20 06:17:16.520291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.137 [2024-11-20 06:17:16.520303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.838 ms 00:18:57.137 [2024-11-20 06:17:16.520316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.520397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.520406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.137 [2024-11-20 06:17:16.520415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:57.137 [2024-11-20 06:17:16.520423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.520471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.520480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:57.137 [2024-11-20 06:17:16.520488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:57.137 [2024-11-20 06:17:16.520508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.520534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:57.137 [2024-11-20 06:17:16.524119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.524155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.137 [2024-11-20 06:17:16.524165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:18:57.137 [2024-11-20 06:17:16.524176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.524211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.524219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:57.137 [2024-11-20 06:17:16.524227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:57.137 [2024-11-20 06:17:16.524234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.524259] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:57.137 [2024-11-20 06:17:16.524277] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:57.137 [2024-11-20 06:17:16.524313] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:57.137 [2024-11-20 06:17:16.524330] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:57.137 [2024-11-20 06:17:16.524432] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:57.137 [2024-11-20 06:17:16.524442] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:57.137 [2024-11-20 06:17:16.524452] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:57.137 [2024-11-20 06:17:16.524462] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:57.137 [2024-11-20 06:17:16.524471] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:57.137 [2024-11-20 06:17:16.524479] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:57.137 [2024-11-20 06:17:16.524487] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:57.137 [2024-11-20 06:17:16.524505] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:57.137 [2024-11-20 06:17:16.524514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:57.137 [2024-11-20 06:17:16.524522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.524529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:57.137 [2024-11-20 06:17:16.524537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:18:57.137 [2024-11-20 06:17:16.524544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.524626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.137 [2024-11-20 06:17:16.524634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:57.137 [2024-11-20 06:17:16.524642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:57.137 [2024-11-20 06:17:16.524648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.137 [2024-11-20 06:17:16.524788] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:57.137 [2024-11-20 06:17:16.524811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:57.137 [2024-11-20 06:17:16.524819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:57.137 [2024-11-20 06:17:16.524827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.137 [2024-11-20 06:17:16.524835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:57.137 [2024-11-20 06:17:16.524842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:57.137 [2024-11-20 06:17:16.524849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:57.137 [2024-11-20 06:17:16.524855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:57.137 [2024-11-20 06:17:16.524862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:57.137 [2024-11-20 06:17:16.524868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:57.137 [2024-11-20 06:17:16.524875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:57.137 [2024-11-20 06:17:16.524882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:57.137 [2024-11-20 06:17:16.524888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:57.137 [2024-11-20 06:17:16.524894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:57.137 [2024-11-20 06:17:16.524903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:57.137 [2024-11-20 06:17:16.524916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.138 [2024-11-20 06:17:16.524922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:57.138 [2024-11-20 06:17:16.524929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:57.138 [2024-11-20 06:17:16.524935] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.138 [2024-11-20 06:17:16.524941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:57.138 [2024-11-20 06:17:16.524948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:57.138 [2024-11-20 06:17:16.524955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.138 [2024-11-20 06:17:16.524961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:57.138 [2024-11-20 06:17:16.524968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:57.138 [2024-11-20 06:17:16.524974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.138 [2024-11-20 06:17:16.524981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:57.138 [2024-11-20 06:17:16.524987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:57.138 [2024-11-20 06:17:16.524993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.138 [2024-11-20 06:17:16.525000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:57.138 [2024-11-20 06:17:16.525007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:57.138 [2024-11-20 06:17:16.525013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:57.138 [2024-11-20 06:17:16.525019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:57.138 [2024-11-20 06:17:16.525025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:57.138 [2024-11-20 06:17:16.525031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:57.138 [2024-11-20 06:17:16.525038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:57.138 [2024-11-20 06:17:16.525044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:57.138 [2024-11-20 06:17:16.525050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:57.138 [2024-11-20 06:17:16.525057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:57.138 [2024-11-20 06:17:16.525064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:57.138 [2024-11-20 06:17:16.525070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.138 [2024-11-20 06:17:16.525076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:57.138 [2024-11-20 06:17:16.525083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:57.138 [2024-11-20 06:17:16.525089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.138 [2024-11-20 06:17:16.525095] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:57.138 [2024-11-20 06:17:16.525103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:57.138 [2024-11-20 06:17:16.525110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:57.138 [2024-11-20 06:17:16.525117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.138 [2024-11-20 06:17:16.525125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:57.138 [2024-11-20 06:17:16.525132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:57.138 [2024-11-20 06:17:16.525139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:57.138 
[2024-11-20 06:17:16.525146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:57.138 [2024-11-20 06:17:16.525152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:57.138 [2024-11-20 06:17:16.525158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:57.138 [2024-11-20 06:17:16.525166] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:57.138 [2024-11-20 06:17:16.525175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:57.138 [2024-11-20 06:17:16.525183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:57.138 [2024-11-20 06:17:16.525190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:57.138 [2024-11-20 06:17:16.525197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:57.138 [2024-11-20 06:17:16.525204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:57.138 [2024-11-20 06:17:16.525211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:57.138 [2024-11-20 06:17:16.525218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:57.138 [2024-11-20 06:17:16.525225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:57.138 [2024-11-20 06:17:16.525231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:57.138 [2024-11-20 06:17:16.525238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:57.138 [2024-11-20 06:17:16.525245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:57.138 [2024-11-20 06:17:16.525251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:57.138 [2024-11-20 06:17:16.525258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:57.138 [2024-11-20 06:17:16.525265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:57.138 [2024-11-20 06:17:16.525272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:57.138 [2024-11-20 06:17:16.525279] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:57.138 [2024-11-20 06:17:16.525289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:57.138 [2024-11-20 06:17:16.525296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:57.138 [2024-11-20 06:17:16.525303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:57.138 [2024-11-20 06:17:16.525310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:57.138 [2024-11-20 06:17:16.525317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:57.138 [2024-11-20 06:17:16.525324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.525331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:57.138 [2024-11-20 06:17:16.525338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:18:57.138 [2024-11-20 06:17:16.525346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.551558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.551608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.138 [2024-11-20 06:17:16.551620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.169 ms 00:18:57.138 [2024-11-20 06:17:16.551628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.551727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.551735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:57.138 [2024-11-20 06:17:16.551743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:57.138 [2024-11-20 06:17:16.551750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.595606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.595662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.138 [2024-11-20 06:17:16.595676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.788 ms 00:18:57.138 [2024-11-20 06:17:16.595684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.595741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.595751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.138 [2024-11-20 06:17:16.595763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:57.138 [2024-11-20 06:17:16.595770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.596170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.596197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.138 [2024-11-20 06:17:16.596207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:18:57.138 [2024-11-20 06:17:16.596214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.596345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.596361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.138 [2024-11-20 06:17:16.596370] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:57.138 [2024-11-20 06:17:16.596382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.609486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.138 [2024-11-20 06:17:16.609543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.138 [2024-11-20 06:17:16.609558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.084 ms 00:18:57.138 [2024-11-20 06:17:16.609566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.138 [2024-11-20 06:17:16.622888] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:57.138 [2024-11-20 06:17:16.622948] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:57.139 [2024-11-20 06:17:16.622961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.622970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:57.139 [2024-11-20 06:17:16.622981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.281 ms 00:18:57.139 [2024-11-20 06:17:16.622988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.648035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.648100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:57.139 [2024-11-20 06:17:16.648115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.985 ms 00:18:57.139 [2024-11-20 06:17:16.648124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.661239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.661289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:57.139 [2024-11-20 06:17:16.661301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.051 ms 00:18:57.139 [2024-11-20 06:17:16.661309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.673126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.673171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:57.139 [2024-11-20 06:17:16.673182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.767 ms 00:18:57.139 [2024-11-20 06:17:16.673189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.673843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.673869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.139 [2024-11-20 06:17:16.673878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:18:57.139 [2024-11-20 06:17:16.673888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.731952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.732020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:57.139 [2024-11-20 06:17:16.732040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.044 ms 00:18:57.139 [2024-11-20 06:17:16.732049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.743050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:57.139 [2024-11-20 06:17:16.745946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.745981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.139 [2024-11-20 06:17:16.745994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.836 ms 00:18:57.139 [2024-11-20 06:17:16.746003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.746120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.746132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:57.139 [2024-11-20 06:17:16.746141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:57.139 [2024-11-20 06:17:16.746150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.746215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.746225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.139 [2024-11-20 06:17:16.746233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:57.139 [2024-11-20 06:17:16.746241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.746261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.746270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:57.139 [2024-11-20 06:17:16.746277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:57.139 [2024-11-20 06:17:16.746284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.139 [2024-11-20 06:17:16.746316] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:57.139 [2024-11-20 06:17:16.746326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.139 [2024-11-20 06:17:16.746334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:57.139 [2024-11-20 06:17:16.746341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:57.139 [2024-11-20 06:17:16.746349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.398 [2024-11-20 06:17:16.770949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.398 [2024-11-20 06:17:16.770996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.398 [2024-11-20 06:17:16.771009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.582 ms 00:18:57.398 [2024-11-20 06:17:16.771022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.398 [2024-11-20 06:17:16.771107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.398 [2024-11-20 06:17:16.771118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.398 [2024-11-20 06:17:16.771126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:57.398 [2024-11-20 06:17:16.771134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:57.398 [2024-11-20 06:17:16.772257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.874 ms, result 0 00:18:58.331  [2024-11-20T06:17:18.898Z] Copying: 43/1024 [MB] (43 MBps) [2024-11-20T06:17:19.892Z] Copying: 88/1024 [MB] (45 MBps) [2024-11-20T06:17:20.826Z] Copying: 134/1024 [MB] (45 MBps) [2024-11-20T06:17:22.202Z] Copying: 179/1024 [MB] (45 MBps) [2024-11-20T06:17:23.134Z] Copying: 225/1024 [MB] (46 MBps) [2024-11-20T06:17:24.068Z] Copying: 271/1024 [MB] (45 MBps) [2024-11-20T06:17:24.999Z] Copying: 317/1024 [MB] (45 MBps) [2024-11-20T06:17:25.931Z] Copying: 362/1024 [MB] (45 MBps) [2024-11-20T06:17:26.863Z] Copying: 409/1024 [MB] (46 MBps) [2024-11-20T06:17:27.798Z] Copying: 455/1024 [MB] (45 MBps) [2024-11-20T06:17:29.193Z] Copying: 499/1024 [MB] (44 MBps) [2024-11-20T06:17:30.126Z] Copying: 540/1024 [MB] (41 MBps) [2024-11-20T06:17:31.060Z] Copying: 584/1024 [MB] (43 MBps) [2024-11-20T06:17:31.993Z] Copying: 630/1024 [MB] (45 MBps) [2024-11-20T06:17:32.927Z] Copying: 676/1024 [MB] (46 MBps) [2024-11-20T06:17:33.948Z] Copying: 722/1024 [MB] (46 MBps) [2024-11-20T06:17:34.879Z] Copying: 767/1024 [MB] (44 MBps) [2024-11-20T06:17:35.814Z] Copying: 812/1024 [MB] (45 MBps) [2024-11-20T06:17:37.190Z] Copying: 846/1024 [MB] (34 MBps) [2024-11-20T06:17:38.124Z] Copying: 880/1024 [MB] (34 MBps) [2024-11-20T06:17:39.058Z] Copying: 926/1024 [MB] (45 MBps) [2024-11-20T06:17:39.997Z] Copying: 970/1024 [MB] (44 MBps) [2024-11-20T06:17:39.997Z] Copying: 1016/1024 [MB] (45 MBps) [2024-11-20T06:17:39.997Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-20 06:17:39.953761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.364 [2024-11-20 06:17:39.953811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.364 [2024-11-20 06:17:39.953824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.364 [2024-11-20 06:17:39.953831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.364 [2024-11-20 06:17:39.953852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.364 [2024-11-20 06:17:39.956510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.364 [2024-11-20 06:17:39.956567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.364 [2024-11-20 06:17:39.956581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:19:20.364 [2024-11-20 06:17:39.956588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.364 [2024-11-20 06:17:39.958039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.364 [2024-11-20 06:17:39.958074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.364 [2024-11-20 06:17:39.958084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:19:20.364 [2024-11-20 06:17:39.958092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.364 [2024-11-20 06:17:39.972985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.364 [2024-11-20 06:17:39.973037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.364 [2024-11-20 06:17:39.973048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.878 ms 00:19:20.364 [2024-11-20 06:17:39.973064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:20.364 [2024-11-20 06:17:39.979229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.364 [2024-11-20 06:17:39.979254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.364 [2024-11-20 06:17:39.979263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:19:20.364 [2024-11-20 06:17:39.979272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.002457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.623 [2024-11-20 06:17:40.002498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.623 [2024-11-20 06:17:40.002508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.139 ms 00:19:20.623 [2024-11-20 06:17:40.002516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.016260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.623 [2024-11-20 06:17:40.016292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.623 [2024-11-20 06:17:40.016303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.713 ms 00:19:20.623 [2024-11-20 06:17:40.016312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.016434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.623 [2024-11-20 06:17:40.016448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.623 [2024-11-20 06:17:40.016457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:20.623 [2024-11-20 06:17:40.016464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.038939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.623 [2024-11-20 06:17:40.038970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:20.623 [2024-11-20 06:17:40.038979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.461 ms 00:19:20.623 [2024-11-20 06:17:40.038987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.060849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.623 [2024-11-20 06:17:40.060888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:20.623 [2024-11-20 06:17:40.060898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.831 ms 00:19:20.623 [2024-11-20 06:17:40.060906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.082905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.623 [2024-11-20 06:17:40.082941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.623 [2024-11-20 06:17:40.082952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.969 ms 00:19:20.623 [2024-11-20 06:17:40.082958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.104867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.623 [2024-11-20 06:17:40.104906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.623 [2024-11-20 06:17:40.104916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.852 ms 00:19:20.623 
[2024-11-20 06:17:40.104923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.623 [2024-11-20 06:17:40.104958] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.623 [2024-11-20 06:17:40.104973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.623 [2024-11-20 06:17:40.104990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.623 [2024-11-20 06:17:40.104998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [2024-11-20 06:17:40.105156] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.624 [Bands 25-97: 0 / 261120 wr_cnt: 0 state: free] 00:19:20.624 [2024-11-20 06:17:40.105718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120
wr_cnt: 0 state: free 00:19:20.625 [2024-11-20 06:17:40.105726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.625 [2024-11-20 06:17:40.105734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.625 [2024-11-20 06:17:40.105750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.625 [2024-11-20 06:17:40.105761] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 65e3eda6-79b4-4218-8e10-01f7dd4585d1 00:19:20.625 [2024-11-20 06:17:40.105769] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.625 [2024-11-20 06:17:40.105776] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.625 [2024-11-20 06:17:40.105783] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.625 [2024-11-20 06:17:40.105791] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.625 [2024-11-20 06:17:40.105798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.625 [2024-11-20 06:17:40.105806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.625 [2024-11-20 06:17:40.105820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.625 [2024-11-20 06:17:40.105826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.625 [2024-11-20 06:17:40.105832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.625 [2024-11-20 06:17:40.105839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.625 [2024-11-20 06:17:40.105847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.625 [2024-11-20 06:17:40.105855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:19:20.625 [2024-11-20 06:17:40.105863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.625 [2024-11-20 06:17:40.118147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.625 [2024-11-20 06:17:40.118182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.625 [2024-11-20 06:17:40.118193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.265 ms 00:19:20.625 [2024-11-20 06:17:40.118201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.625 [2024-11-20 06:17:40.118560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.625 [2024-11-20 06:17:40.118580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.625 [2024-11-20 06:17:40.118592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:20.625 [2024-11-20 06:17:40.118600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.625 [2024-11-20 06:17:40.150985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.625 [2024-11-20 06:17:40.151019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.625 [2024-11-20 06:17:40.151029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.625 [2024-11-20 06:17:40.151037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.625 [2024-11-20 06:17:40.151090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.625 [2024-11-20 06:17:40.151098] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.625 [2024-11-20 06:17:40.151110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.625 [2024-11-20 06:17:40.151117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.625 [2024-11-20 06:17:40.151190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.625 [2024-11-20 06:17:40.151200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.625 [2024-11-20 06:17:40.151208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.625 [2024-11-20 06:17:40.151215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.625 [2024-11-20 06:17:40.151229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.625 [2024-11-20 06:17:40.151237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.625 [2024-11-20 06:17:40.151244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.625 [2024-11-20 06:17:40.151252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.625 [2024-11-20 06:17:40.227793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.625 [2024-11-20 06:17:40.227830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.625 [2024-11-20 06:17:40.227841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.625 [2024-11-20 06:17:40.227849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.882 [2024-11-20 06:17:40.290272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.883 [2024-11-20 06:17:40.290316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.883 [2024-11-20 06:17:40.290327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.883 [2024-11-20 06:17:40.290340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.883 [2024-11-20 06:17:40.290390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.883 [2024-11-20 06:17:40.290398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.883 [2024-11-20 06:17:40.290406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.883 [2024-11-20 06:17:40.290414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.883 [2024-11-20 06:17:40.290460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.883 [2024-11-20 06:17:40.290469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.883 [2024-11-20 06:17:40.290476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.883 [2024-11-20 06:17:40.290483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.883 [2024-11-20 06:17:40.290588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.883 [2024-11-20 06:17:40.290604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.883 [2024-11-20 06:17:40.290612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.883 [2024-11-20 06:17:40.290620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.883 [2024-11-20 06:17:40.290647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:20.883 [2024-11-20 06:17:40.290656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:20.883 [2024-11-20 06:17:40.290663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.883 [2024-11-20 06:17:40.290671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.883 [2024-11-20 06:17:40.290706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.883 [2024-11-20 06:17:40.290715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.883 [2024-11-20 06:17:40.290723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.883 [2024-11-20 06:17:40.290730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.883 [2024-11-20 06:17:40.290768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.883 [2024-11-20 06:17:40.290778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.883 [2024-11-20 06:17:40.290785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.883 [2024-11-20 06:17:40.290792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.883 [2024-11-20 06:17:40.290942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.137 ms, result 0 00:19:22.257 00:19:22.257 00:19:22.257 06:17:41 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:22.257 [2024-11-20 06:17:41.689829] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
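The step above hands off to spdk_dd, which re-reads the restored region from the FTL bdev into a plain file. A minimal annotated sketch of that invocation follows; the paths, the ftl0 bdev name, and the skip/count values are taken from the command in the log, while the flag glossary assumes the usual dd-style semantics (offsets and counts in I/O units of the configured block size):

  # --ib=ftl0     read from the SPDK bdev named "ftl0" (defined in ftl.json)
  # --of=FILE     write the data out to a regular file on the host
  # --json=FILE   SPDK app config describing the ftl0 bdev stack
  # --skip=N      skip N I/O units at the start of the input bdev
  # --count=N     copy N I/O units, then shut the app down
  ./build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --skip=131072 --count=262144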
00:19:22.257 [2024-11-20 06:17:41.689954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75469 ] 00:19:22.257 [2024-11-20 06:17:41.849215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.515 [2024-11-20 06:17:41.948699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.775 [2024-11-20 06:17:42.200150] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.775 [2024-11-20 06:17:42.200209] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.775 [2024-11-20 06:17:42.353853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.353898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.775 [2024-11-20 06:17:42.353915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.775 [2024-11-20 06:17:42.353923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.353968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.353978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.775 [2024-11-20 06:17:42.353988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:22.775 [2024-11-20 06:17:42.353995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.354011] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.775 [2024-11-20 06:17:42.354715] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.775 [2024-11-20 06:17:42.354736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.354744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.775 [2024-11-20 06:17:42.354752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:19:22.775 [2024-11-20 06:17:42.354759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.355899] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.775 [2024-11-20 06:17:42.367898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.367928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.775 [2024-11-20 06:17:42.367939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.002 ms 00:19:22.775 [2024-11-20 06:17:42.367947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.368001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.368011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.775 [2024-11-20 06:17:42.368019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:22.775 [2024-11-20 06:17:42.368026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.372723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:22.775 [2024-11-20 06:17:42.372752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.775 [2024-11-20 06:17:42.372761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:19:22.775 [2024-11-20 06:17:42.372776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.372855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.372864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.775 [2024-11-20 06:17:42.372878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:22.775 [2024-11-20 06:17:42.372886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.372942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.372955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.775 [2024-11-20 06:17:42.372966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:22.775 [2024-11-20 06:17:42.372973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.372999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.775 [2024-11-20 06:17:42.376420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.376443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.775 [2024-11-20 06:17:42.376451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:19:22.775 [2024-11-20 06:17:42.376461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.376489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.376507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.775 [2024-11-20 06:17:42.376514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:22.775 [2024-11-20 06:17:42.376521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.376539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.775 [2024-11-20 06:17:42.376557] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:22.775 [2024-11-20 06:17:42.376590] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.775 [2024-11-20 06:17:42.376607] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:22.775 [2024-11-20 06:17:42.376709] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.775 [2024-11-20 06:17:42.376720] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.775 [2024-11-20 06:17:42.376729] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.775 [2024-11-20 06:17:42.376739] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.775 [2024-11-20 06:17:42.376747] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.775 [2024-11-20 06:17:42.376755] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:22.775 [2024-11-20 06:17:42.376762] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.775 [2024-11-20 06:17:42.376769] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.775 [2024-11-20 06:17:42.376778] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.775 [2024-11-20 06:17:42.376785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.376793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.775 [2024-11-20 06:17:42.376800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:22.775 [2024-11-20 06:17:42.376807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.376888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.775 [2024-11-20 06:17:42.376897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.775 [2024-11-20 06:17:42.376904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:22.775 [2024-11-20 06:17:42.376911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.775 [2024-11-20 06:17:42.377024] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.775 [2024-11-20 06:17:42.377035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.775 [2024-11-20 06:17:42.377043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.775 [2024-11-20 06:17:42.377050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.775 [2024-11-20 06:17:42.377058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.775 [2024-11-20 06:17:42.377065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.775 [2024-11-20 06:17:42.377072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:22.775 [2024-11-20 06:17:42.377079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.775 [2024-11-20 06:17:42.377087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:22.775 [2024-11-20 06:17:42.377093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.775 [2024-11-20 06:17:42.377100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.775 [2024-11-20 06:17:42.377106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:22.775 [2024-11-20 06:17:42.377113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.775 [2024-11-20 06:17:42.377120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.775 [2024-11-20 06:17:42.377127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:22.775 [2024-11-20 06:17:42.377138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.775 [2024-11-20 06:17:42.377145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.775 [2024-11-20 06:17:42.377151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:22.775 [2024-11-20 06:17:42.377157] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.775 [2024-11-20 06:17:42.377164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.775 [2024-11-20 06:17:42.377172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:22.775 [2024-11-20 06:17:42.377178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.775 [2024-11-20 06:17:42.377184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.776 [2024-11-20 06:17:42.377191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:22.776 [2024-11-20 06:17:42.377198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.776 [2024-11-20 06:17:42.377205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.776 [2024-11-20 06:17:42.377211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:22.776 [2024-11-20 06:17:42.377217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.776 [2024-11-20 06:17:42.377223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.776 [2024-11-20 06:17:42.377230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:22.776 [2024-11-20 06:17:42.377236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.776 [2024-11-20 06:17:42.377242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.776 [2024-11-20 06:17:42.377248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:22.776 [2024-11-20 06:17:42.377255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.776 [2024-11-20 06:17:42.377261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.776 [2024-11-20 06:17:42.377268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:22.776 [2024-11-20 06:17:42.377274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.776 [2024-11-20 06:17:42.377280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.776 [2024-11-20 06:17:42.377287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:22.776 [2024-11-20 06:17:42.377293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.776 [2024-11-20 06:17:42.377299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.776 [2024-11-20 06:17:42.377305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:22.776 [2024-11-20 06:17:42.377312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.776 [2024-11-20 06:17:42.377318] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.776 [2024-11-20 06:17:42.377325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.776 [2024-11-20 06:17:42.377333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.776 [2024-11-20 06:17:42.377340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.776 [2024-11-20 06:17:42.377348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.776 [2024-11-20 06:17:42.377355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.776 [2024-11-20 06:17:42.377361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.776 
[2024-11-20 06:17:42.377368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.776 [2024-11-20 06:17:42.377374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.776 [2024-11-20 06:17:42.377382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.776 [2024-11-20 06:17:42.377390] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.776 [2024-11-20 06:17:42.377398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.776 [2024-11-20 06:17:42.377407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:22.776 [2024-11-20 06:17:42.377414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:22.776 [2024-11-20 06:17:42.377421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:22.776 [2024-11-20 06:17:42.377428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:22.776 [2024-11-20 06:17:42.377435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:22.776 [2024-11-20 06:17:42.377442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:22.776 [2024-11-20 06:17:42.377449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:22.776 [2024-11-20 06:17:42.377455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:22.776 [2024-11-20 06:17:42.377463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:22.776 [2024-11-20 06:17:42.377469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:22.776 [2024-11-20 06:17:42.377476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:22.776 [2024-11-20 06:17:42.377484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:22.776 [2024-11-20 06:17:42.377501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:22.776 [2024-11-20 06:17:42.377509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:22.776 [2024-11-20 06:17:42.377516] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.776 [2024-11-20 06:17:42.377526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.776 [2024-11-20 06:17:42.377535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.776 [2024-11-20 06:17:42.377543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.776 [2024-11-20 06:17:42.377550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.776 [2024-11-20 06:17:42.377557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.776 [2024-11-20 06:17:42.377565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.776 [2024-11-20 06:17:42.377572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.776 [2024-11-20 06:17:42.377582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:19:22.776 [2024-11-20 06:17:42.377589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 06:17:42.403101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.776 [2024-11-20 06:17:42.403132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.776 [2024-11-20 06:17:42.403142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.472 ms 00:19:22.776 [2024-11-20 06:17:42.403150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.776 [2024-11-20 06:17:42.403232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.776 [2024-11-20 06:17:42.403240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.776 [2024-11-20 06:17:42.403248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:22.776 [2024-11-20 06:17:42.403255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.446556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.446597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.038 [2024-11-20 06:17:42.446609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.251 ms 00:19:23.038 [2024-11-20 06:17:42.446617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.446663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.446672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:23.038 [2024-11-20 06:17:42.446684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:23.038 [2024-11-20 06:17:42.446691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.447067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.447090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:23.038 [2024-11-20 06:17:42.447099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:23.038 [2024-11-20 06:17:42.447107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.447227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.447235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:23.038 [2024-11-20 06:17:42.447244] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:23.038 [2024-11-20 06:17:42.447256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.460674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.460713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.038 [2024-11-20 06:17:42.460730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.396 ms 00:19:23.038 [2024-11-20 06:17:42.460738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.473312] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:19:23.038 [2024-11-20 06:17:42.473345] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:23.038 [2024-11-20 06:17:42.473358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.473368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:23.038 [2024-11-20 06:17:42.473377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.529 ms 00:19:23.038 [2024-11-20 06:17:42.473384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.497977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.498014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:23.038 [2024-11-20 06:17:42.498026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.550 ms 00:19:23.038 [2024-11-20 06:17:42.498034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.509513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.038 [2024-11-20 06:17:42.509552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:23.038 [2024-11-20 06:17:42.509562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.427 ms 00:19:23.038 [2024-11-20 06:17:42.509569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.038 [2024-11-20 06:17:42.520794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.520822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:23.039 [2024-11-20 06:17:42.520832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.190 ms 00:19:23.039 [2024-11-20 06:17:42.520839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.521451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.521473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:23.039 [2024-11-20 06:17:42.521482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:23.039 [2024-11-20 06:17:42.521503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.575726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.575768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:23.039 [2024-11-20 06:17:42.575785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.205 ms 00:19:23.039 [2024-11-20 06:17:42.575793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.586140] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:23.039 [2024-11-20 06:17:42.588551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.588578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.039 [2024-11-20 06:17:42.588590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.716 ms 00:19:23.039 [2024-11-20 06:17:42.588599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.588693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.588704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:23.039 [2024-11-20 06:17:42.588712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:23.039 [2024-11-20 06:17:42.588722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.588783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.588800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:23.039 [2024-11-20 06:17:42.588808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:23.039 [2024-11-20 06:17:42.588816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.588834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.588842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:23.039 [2024-11-20 06:17:42.588850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:23.039 [2024-11-20 06:17:42.588857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.588887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:23.039 [2024-11-20 06:17:42.588897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.588905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:23.039 [2024-11-20 06:17:42.588913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:23.039 [2024-11-20 06:17:42.588920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.611703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.611733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:23.039 [2024-11-20 06:17:42.611744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.766 ms 00:19:23.039 [2024-11-20 06:17:42.611756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.039 [2024-11-20 06:17:42.611825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.039 [2024-11-20 06:17:42.611834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:23.039 [2024-11-20 06:17:42.611842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:23.039 [2024-11-20 06:17:42.611849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
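Every management step in the startup sequence above is traced as an Action (or Rollback) / name / duration / status group, with status 0 indicating success. A quick way to check that a whole console log like this contains no failing step (a sketch; build.log is a hypothetical file holding this output):

  # count occurrences of each distinct step status; anything but 'status: 0' indicates a failed step
  grep -o 'status: [0-9-]*' build.log | sort | uniq -c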
00:19:23.039 [2024-11-20 06:17:42.612854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.589 ms, result 0 00:19:24.413  [2024-11-20T06:17:44.981Z] Copying: 944/1048576 [kB] (944 kBps) [per-interval copy progress records: 35/1024 [MB] ... 1000/1024 [MB]] [2024-11-20T06:18:39.756Z] Copying: 
1012/1024 [MB] (11 MBps) [2024-11-20T06:18:39.756Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-20 06:18:39.729384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.123 [2024-11-20 06:18:39.729453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.124 [2024-11-20 06:18:39.729467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.124 [2024-11-20 06:18:39.729486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.124 [2024-11-20 06:18:39.729533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.124 [2024-11-20 06:18:39.732683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.124 [2024-11-20 06:18:39.732718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.124 [2024-11-20 06:18:39.732730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:20:20.124 [2024-11-20 06:18:39.732738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.124 [2024-11-20 06:18:39.732965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.124 [2024-11-20 06:18:39.732975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.124 [2024-11-20 06:18:39.732984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:20.124 [2024-11-20 06:18:39.732991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.124 [2024-11-20 06:18:39.744869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.124 [2024-11-20 06:18:39.744904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.124 [2024-11-20 06:18:39.744916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.859 ms 00:20:20.124 [2024-11-20 06:18:39.744923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.124 [2024-11-20 06:18:39.751075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.124 [2024-11-20 06:18:39.751102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.124 [2024-11-20 06:18:39.751111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:20:20.124 [2024-11-20 06:18:39.751119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.385 [2024-11-20 06:18:39.775970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.385 [2024-11-20 06:18:39.776013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.385 [2024-11-20 06:18:39.776025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.796 ms 00:20:20.385 [2024-11-20 06:18:39.776033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.385 [2024-11-20 06:18:39.789483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.385 [2024-11-20 06:18:39.789530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.385 [2024-11-20 06:18:39.789540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.416 ms 00:20:20.385 [2024-11-20 06:18:39.789549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.647 [2024-11-20 06:18:40.192642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.647 [2024-11-20 06:18:40.192730] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.647 [2024-11-20 06:18:40.192746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 403.063 ms 00:20:20.647 [2024-11-20 06:18:40.192755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.647 [2024-11-20 06:18:40.217920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.647 [2024-11-20 06:18:40.217975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:20.647 [2024-11-20 06:18:40.217989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.148 ms 00:20:20.647 [2024-11-20 06:18:40.217996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.647 [2024-11-20 06:18:40.242010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.647 [2024-11-20 06:18:40.242059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:20.647 [2024-11-20 06:18:40.242081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.975 ms 00:20:20.647 [2024-11-20 06:18:40.242089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.647 [2024-11-20 06:18:40.265110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.647 [2024-11-20 06:18:40.265161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.647 [2024-11-20 06:18:40.265174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.982 ms 00:20:20.647 [2024-11-20 06:18:40.265183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.910 [2024-11-20 06:18:40.288360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.910 [2024-11-20 06:18:40.288404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.910 [2024-11-20 06:18:40.288416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.108 ms 00:20:20.910 [2024-11-20 06:18:40.288423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.910 [2024-11-20 06:18:40.288457] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.910 [2024-11-20 06:18:40.288472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:20:20.910 [2024-11-20 06:18:40.288482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [2024-11-20 06:18:40.288555] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.910 [Bands 11-83: 0 / 261120 wr_cnt: 0 state: free] 00:20:20.911 [2024-11-20 06:18:40.289113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.911 [2024-11-20 06:18:40.289192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.912 [2024-11-20 06:18:40.289199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.912 [2024-11-20 06:18:40.289209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.912 [2024-11-20 06:18:40.289216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.912 [2024-11-20 06:18:40.289223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.912 [2024-11-20 06:18:40.289230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.912 [2024-11-20 06:18:40.289246] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.912 [2024-11-20 06:18:40.289253] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 65e3eda6-79b4-4218-8e10-01f7dd4585d1 00:20:20.912 [2024-11-20 06:18:40.289261] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 00:20:20.912 [2024-11-20 06:18:40.289268] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132544 00:20:20.912 [2024-11-20 06:18:40.289274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131584 00:20:20.912 [2024-11-20 06:18:40.289282] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:20:20.912 [2024-11-20 06:18:40.289289] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.912 [2024-11-20 06:18:40.289301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.912 [2024-11-20 06:18:40.289308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.912 [2024-11-20 06:18:40.289322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.912 [2024-11-20 06:18:40.289328] ftl_debug.c: 
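(The WAF figure in the dump above is simply total writes divided by user writes. A quick sanity check of the numbers, in bash with the values copied from the log; reading the 960 extra blocks as FTL-internal metadata/relocation writes is an interpretation, not something the log states:)

# WAF sanity check, figures from ftl_dev_dump_stats above:
# total writes = 132544, user writes = 131584 (difference: 960 blocks)
awk 'BEGIN { printf "WAF = %.4f\n", 132544 / 131584 }'   # prints: WAF = 1.0073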
00:20:20.912 [2024-11-20 06:18:40.289335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.912 [2024-11-20 06:18:40.289343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:20.912 [2024-11-20 06:18:40.289351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms
00:20:20.912 [2024-11-20 06:18:40.289358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.912 [2024-11-20 06:18:40.301764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.912 [2024-11-20 06:18:40.301803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:20.912 [2024-11-20 06:18:40.301815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.390 ms
00:20:20.912 [2024-11-20 06:18:40.301827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.912 [2024-11-20 06:18:40.302187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:20.912 [2024-11-20 06:18:40.302201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:20.912 [2024-11-20 06:18:40.302210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms
00:20:20.912 [2024-11-20 06:18:40.302217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:20.912 [2024-11-20 06:18:40.334720 .. 06:18:40.475946] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev (12 steps, each with duration: 0.000 ms, status: 0; identical entries condensed)
00:20:20.912 [2024-11-20 06:18:40.476057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 746.645 ms, result 0
00:20:21.888 
00:20:21.888 
00:20:21.888 06:18:41 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:23.891 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
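(The "testfile: OK" line above is the point of the restore test: an md5 checksum recorded earlier in the run must still verify after the FTL device has been shut down and brought back. A minimal sketch of the pattern, with hypothetical paths rather than the test's actual ones:)

# Before: record a checksum of the data written through the FTL bdev.
md5sum /path/to/testfile > /path/to/testfile.md5
# ... shut down, restore the FTL device, re-read the data ...
# After: verify the re-read data against the recorded checksum.
md5sum -c /path/to/testfile.md5   # prints "/path/to/testfile: OK" on success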
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74417
00:20:23.891 06:18:43 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74417 ']'
00:20:23.891 06:18:43 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74417
00:20:23.891 Process with pid 74417 is not found
00:20:23.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74417) - No such process
00:20:23.891 06:18:43 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74417 is not found'
00:20:23.891 Remove shared memory files
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:20:23.891 06:18:43 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:20:23.891 
00:20:23.891 real 2m43.538s
00:20:23.891 user 2m32.511s
00:20:23.891 sys 0m11.569s
00:20:23.891 06:18:43 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:23.891 06:18:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:20:23.891 ************************************
00:20:23.891 END TEST ftl_restore
00:20:23.891 ************************************
00:20:24.153 06:18:43 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:20:24.153 06:18:43 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:20:24.153 06:18:43 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:24.153 06:18:43 ftl -- common/autotest_common.sh@10 -- # set +x
00:20:24.153 ************************************
00:20:24.153 START TEST ftl_dirty_shutdown
00:20:24.153 ************************************
00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:20:24.153 * Looking for test storage...
00:20:24.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:24.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.153 --rc genhtml_branch_coverage=1 00:20:24.153 --rc genhtml_function_coverage=1 00:20:24.153 --rc genhtml_legend=1 00:20:24.153 --rc geninfo_all_blocks=1 00:20:24.153 --rc geninfo_unexecuted_blocks=1 00:20:24.153 00:20:24.153 ' 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:24.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.153 --rc genhtml_branch_coverage=1 00:20:24.153 --rc genhtml_function_coverage=1 00:20:24.153 --rc genhtml_legend=1 00:20:24.153 --rc geninfo_all_blocks=1 00:20:24.153 --rc geninfo_unexecuted_blocks=1 00:20:24.153 00:20:24.153 ' 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:24.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.153 --rc genhtml_branch_coverage=1 00:20:24.153 --rc genhtml_function_coverage=1 00:20:24.153 --rc genhtml_legend=1 00:20:24.153 --rc geninfo_all_blocks=1 00:20:24.153 --rc geninfo_unexecuted_blocks=1 00:20:24.153 00:20:24.153 ' 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:24.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.153 --rc genhtml_branch_coverage=1 00:20:24.153 --rc genhtml_function_coverage=1 00:20:24.153 --rc genhtml_legend=1 00:20:24.153 --rc geninfo_all_blocks=1 00:20:24.153 --rc geninfo_unexecuted_blocks=1 00:20:24.153 00:20:24.153 ' 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:24.153 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:24.154 06:18:43 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76172 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76172 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 76172 ']' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:24.154 06:18:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:24.154 [2024-11-20 06:18:43.779904] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
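(The option handling traced above, getopts :u:c: in dirty_shutdown.sh with -c selecting the NV cache device and the remaining positional argument naming the base device, is the standard bash getopts idiom. A minimal standalone sketch, not the actual test source; the meaning of -u is an assumption here, it is not exercised in this trace:)

#!/usr/bin/env bash
nv_cache= uuid=
while getopts ':u:c:' opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;            # e.g. -c 0000:00:10.0 (NV cache BDF)
    u) uuid=$OPTARG ;;                # assumed: resume an existing FTL instance
    *) echo "usage: $0 [-c bdf] [-u uuid] base_bdf" >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))                 # matches the 'shift 2' seen in the trace
device=$1                             # e.g. 0000:00:11.0
echo "base device: $device, NV cache: $nv_cache"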
00:20:24.154 [2024-11-20 06:18:43.780029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76172 ] 00:20:24.415 [2024-11-20 06:18:43.939108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.415 [2024-11-20 06:18:44.042273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:20:25.358 06:18:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:25.620 06:18:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:25.620 { 00:20:25.620 "name": "nvme0n1", 00:20:25.620 "aliases": [ 00:20:25.620 "b4a87788-bedc-4526-841b-6e35429dd48d" 00:20:25.620 ], 00:20:25.620 "product_name": "NVMe disk", 00:20:25.620 "block_size": 4096, 00:20:25.620 "num_blocks": 1310720, 00:20:25.620 "uuid": "b4a87788-bedc-4526-841b-6e35429dd48d", 00:20:25.620 "numa_id": -1, 00:20:25.620 "assigned_rate_limits": { 00:20:25.620 "rw_ios_per_sec": 0, 00:20:25.620 "rw_mbytes_per_sec": 0, 00:20:25.620 "r_mbytes_per_sec": 0, 00:20:25.620 "w_mbytes_per_sec": 0 00:20:25.620 }, 00:20:25.620 "claimed": true, 00:20:25.620 "claim_type": "read_many_write_one", 00:20:25.620 "zoned": false, 00:20:25.620 "supported_io_types": { 00:20:25.620 "read": true, 00:20:25.620 "write": true, 00:20:25.620 "unmap": true, 00:20:25.620 "flush": true, 00:20:25.620 "reset": true, 00:20:25.620 "nvme_admin": true, 00:20:25.620 "nvme_io": true, 00:20:25.620 "nvme_io_md": false, 00:20:25.620 "write_zeroes": true, 00:20:25.620 "zcopy": false, 00:20:25.620 "get_zone_info": false, 00:20:25.620 "zone_management": false, 00:20:25.620 "zone_append": false, 00:20:25.620 "compare": true, 00:20:25.620 "compare_and_write": false, 00:20:25.620 "abort": true, 00:20:25.620 "seek_hole": false, 00:20:25.620 "seek_data": false, 00:20:25.620 
"copy": true, 00:20:25.620 "nvme_iov_md": false 00:20:25.620 }, 00:20:25.620 "driver_specific": { 00:20:25.620 "nvme": [ 00:20:25.620 { 00:20:25.620 "pci_address": "0000:00:11.0", 00:20:25.620 "trid": { 00:20:25.620 "trtype": "PCIe", 00:20:25.620 "traddr": "0000:00:11.0" 00:20:25.620 }, 00:20:25.620 "ctrlr_data": { 00:20:25.620 "cntlid": 0, 00:20:25.620 "vendor_id": "0x1b36", 00:20:25.620 "model_number": "QEMU NVMe Ctrl", 00:20:25.620 "serial_number": "12341", 00:20:25.620 "firmware_revision": "8.0.0", 00:20:25.620 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:25.620 "oacs": { 00:20:25.620 "security": 0, 00:20:25.620 "format": 1, 00:20:25.620 "firmware": 0, 00:20:25.620 "ns_manage": 1 00:20:25.620 }, 00:20:25.620 "multi_ctrlr": false, 00:20:25.620 "ana_reporting": false 00:20:25.620 }, 00:20:25.620 "vs": { 00:20:25.620 "nvme_version": "1.4" 00:20:25.620 }, 00:20:25.620 "ns_data": { 00:20:25.620 "id": 1, 00:20:25.620 "can_share": false 00:20:25.620 } 00:20:25.620 } 00:20:25.620 ], 00:20:25.620 "mp_policy": "active_passive" 00:20:25.620 } 00:20:25.620 } 00:20:25.620 ]' 00:20:25.620 06:18:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:25.620 06:18:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:20:25.620 06:18:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b2defe2b-ecf7-48f6-82db-dd897c68e77d 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:25.882 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2defe2b-ecf7-48f6-82db-dd897c68e77d 00:20:26.144 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:26.404 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ed02ffdd-a8a8-4c8b-9f70-375ac812139f 00:20:26.404 06:18:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ed02ffdd-a8a8-4c8b-9f70-375ac812139f 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ba036cb9-6879-4538-941a-0a056531e707 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ba036cb9-6879-4538-941a-0a056531e707 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ba036cb9-6879-4538-941a-0a056531e707 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ba036cb9-6879-4538-941a-0a056531e707 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ba036cb9-6879-4538-941a-0a056531e707 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:20:26.664 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba036cb9-6879-4538-941a-0a056531e707 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:26.925 { 00:20:26.925 "name": "ba036cb9-6879-4538-941a-0a056531e707", 00:20:26.925 "aliases": [ 00:20:26.925 "lvs/nvme0n1p0" 00:20:26.925 ], 00:20:26.925 "product_name": "Logical Volume", 00:20:26.925 "block_size": 4096, 00:20:26.925 "num_blocks": 26476544, 00:20:26.925 "uuid": "ba036cb9-6879-4538-941a-0a056531e707", 00:20:26.925 "assigned_rate_limits": { 00:20:26.925 "rw_ios_per_sec": 0, 00:20:26.925 "rw_mbytes_per_sec": 0, 00:20:26.925 "r_mbytes_per_sec": 0, 00:20:26.925 "w_mbytes_per_sec": 0 00:20:26.925 }, 00:20:26.925 "claimed": false, 00:20:26.925 "zoned": false, 00:20:26.925 "supported_io_types": { 00:20:26.925 "read": true, 00:20:26.925 "write": true, 00:20:26.925 "unmap": true, 00:20:26.925 "flush": false, 00:20:26.925 "reset": true, 00:20:26.925 "nvme_admin": false, 00:20:26.925 "nvme_io": false, 00:20:26.925 "nvme_io_md": false, 00:20:26.925 "write_zeroes": true, 00:20:26.925 "zcopy": false, 00:20:26.925 "get_zone_info": false, 00:20:26.925 "zone_management": false, 00:20:26.925 "zone_append": false, 00:20:26.925 "compare": false, 00:20:26.925 "compare_and_write": false, 00:20:26.925 "abort": false, 00:20:26.925 "seek_hole": true, 00:20:26.925 "seek_data": true, 00:20:26.925 "copy": false, 00:20:26.925 "nvme_iov_md": false 00:20:26.925 }, 00:20:26.925 "driver_specific": { 00:20:26.925 "lvol": { 00:20:26.925 "lvol_store_uuid": "ed02ffdd-a8a8-4c8b-9f70-375ac812139f", 00:20:26.925 "base_bdev": "nvme0n1", 00:20:26.925 "thin_provision": true, 00:20:26.925 "num_allocated_clusters": 0, 00:20:26.925 "snapshot": false, 00:20:26.925 "clone": false, 00:20:26.925 "esnap_clone": false 00:20:26.925 } 00:20:26.925 } 00:20:26.925 } 00:20:26.925 ]' 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:26.925 06:18:46 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ba036cb9-6879-4538-941a-0a056531e707
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ba036cb9-6879-4538-941a-0a056531e707
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb
00:20:27.187 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba036cb9-6879-4538-941a-0a056531e707
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ ... same lvol bdev JSON as dumped above (identical output, condensed) ... ]'
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
00:20:27.448 06:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:20:27.710 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0
00:20:27.710 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ba036cb9-6879-4538-941a-0a056531e707
00:20:27.710 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ba036cb9-6879-4538-941a-0a056531e707
00:20:27.710 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info
00:20:27.710 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs
00:20:27.710 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb
00:20:27.710 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba036cb9-6879-4538-941a-0a056531e707
00:20:27.970 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ ... same lvol bdev JSON as dumped above (identical output, condensed) ... ]'
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10
00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ba036cb9-6879-4538-941a-0a056531e707
--l2p_dram_limit 10' 00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:27.971 06:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ba036cb9-6879-4538-941a-0a056531e707 --l2p_dram_limit 10 -c nvc0n1p0 00:20:28.232 [2024-11-20 06:18:47.640021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.640077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:28.232 [2024-11-20 06:18:47.640093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:28.232 [2024-11-20 06:18:47.640101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.640160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.640170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.232 [2024-11-20 06:18:47.640180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:28.232 [2024-11-20 06:18:47.640188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.640213] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:28.232 [2024-11-20 06:18:47.641059] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:28.232 [2024-11-20 06:18:47.641093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.641101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.232 [2024-11-20 06:18:47.641112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:20:28.232 [2024-11-20 06:18:47.641119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.641189] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9774b9d5-ae7a-4993-b44f-33dde2820a77 00:20:28.232 [2024-11-20 06:18:47.642315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.642356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:28.232 [2024-11-20 06:18:47.642366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:28.232 [2024-11-20 06:18:47.642375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.647619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.647654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.232 [2024-11-20 06:18:47.647664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.198 ms 00:20:28.232 [2024-11-20 06:18:47.647674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.647759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.647770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.232 [2024-11-20 06:18:47.647778] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:28.232 [2024-11-20 06:18:47.647790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.647852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.647864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:28.232 [2024-11-20 06:18:47.647872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:28.232 [2024-11-20 06:18:47.647883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.647904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.232 [2024-11-20 06:18:47.651486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.651524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.232 [2024-11-20 06:18:47.651537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:20:28.232 [2024-11-20 06:18:47.651545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.651577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.651585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:28.232 [2024-11-20 06:18:47.651594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:28.232 [2024-11-20 06:18:47.651601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.651619] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:28.232 [2024-11-20 06:18:47.651759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:28.232 [2024-11-20 06:18:47.651774] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:28.232 [2024-11-20 06:18:47.651784] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:28.232 [2024-11-20 06:18:47.651796] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:28.232 [2024-11-20 06:18:47.651804] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:28.232 [2024-11-20 06:18:47.651813] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:28.232 [2024-11-20 06:18:47.651821] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:28.232 [2024-11-20 06:18:47.651831] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:28.232 [2024-11-20 06:18:47.651838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:28.232 [2024-11-20 06:18:47.651847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.651854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:28.232 [2024-11-20 06:18:47.651863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:20:28.232 [2024-11-20 06:18:47.651876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.651962] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.232 [2024-11-20 06:18:47.651975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:28.232 [2024-11-20 06:18:47.651984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:28.232 [2024-11-20 06:18:47.651991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.232 [2024-11-20 06:18:47.652105] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:28.232 [2024-11-20 06:18:47.652120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:28.232 [2024-11-20 06:18:47.652130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.232 [2024-11-20 06:18:47.652138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.232 [2024-11-20 06:18:47.652146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:28.232 [2024-11-20 06:18:47.652153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:28.232 [2024-11-20 06:18:47.652161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:28.232 [2024-11-20 06:18:47.652168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:28.232 [2024-11-20 06:18:47.652176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:28.232 [2024-11-20 06:18:47.652182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.232 [2024-11-20 06:18:47.652191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:28.232 [2024-11-20 06:18:47.652197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:28.232 [2024-11-20 06:18:47.652206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.232 [2024-11-20 06:18:47.652212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:28.232 [2024-11-20 06:18:47.652223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:28.232 [2024-11-20 06:18:47.652230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.232 [2024-11-20 06:18:47.652240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:28.232 [2024-11-20 06:18:47.652246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:28.232 [2024-11-20 06:18:47.652255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.232 [2024-11-20 06:18:47.652262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:28.233 [2024-11-20 06:18:47.652270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.233 [2024-11-20 06:18:47.652285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:28.233 [2024-11-20 06:18:47.652291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.233 [2024-11-20 06:18:47.652305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:28.233 [2024-11-20 06:18:47.652313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.233 [2024-11-20 06:18:47.652327] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:28.233 [2024-11-20 06:18:47.652333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.233 [2024-11-20 06:18:47.652347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:28.233 [2024-11-20 06:18:47.652357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.233 [2024-11-20 06:18:47.652371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:28.233 [2024-11-20 06:18:47.652377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:28.233 [2024-11-20 06:18:47.652385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.233 [2024-11-20 06:18:47.652392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:28.233 [2024-11-20 06:18:47.652400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:28.233 [2024-11-20 06:18:47.652407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:28.233 [2024-11-20 06:18:47.652436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:28.233 [2024-11-20 06:18:47.652444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652450] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:28.233 [2024-11-20 06:18:47.652459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:28.233 [2024-11-20 06:18:47.652466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.233 [2024-11-20 06:18:47.652476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.233 [2024-11-20 06:18:47.652484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:28.233 [2024-11-20 06:18:47.652507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:28.233 [2024-11-20 06:18:47.652515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:28.233 [2024-11-20 06:18:47.652523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:28.233 [2024-11-20 06:18:47.652530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:28.233 [2024-11-20 06:18:47.652538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:28.233 [2024-11-20 06:18:47.652548] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:28.233 [2024-11-20 06:18:47.652559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.233 [2024-11-20 06:18:47.652569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:28.233 [2024-11-20 06:18:47.652578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:28.233 [2024-11-20 06:18:47.652584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:28.233 [2024-11-20 06:18:47.652593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:28.233 [2024-11-20 06:18:47.652600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:28.233 [2024-11-20 06:18:47.652608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:28.233 [2024-11-20 06:18:47.652615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:28.233 [2024-11-20 06:18:47.652624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:28.233 [2024-11-20 06:18:47.652631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:28.233 [2024-11-20 06:18:47.652641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:28.233 [2024-11-20 06:18:47.652647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:28.233 [2024-11-20 06:18:47.652656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:28.233 [2024-11-20 06:18:47.652662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:28.233 [2024-11-20 06:18:47.652672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:28.233 [2024-11-20 06:18:47.652679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:28.233 [2024-11-20 06:18:47.652689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.233 [2024-11-20 06:18:47.652697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:28.233 [2024-11-20 06:18:47.652706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:28.233 [2024-11-20 06:18:47.652713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:28.233 [2024-11-20 06:18:47.652721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:28.233 [2024-11-20 06:18:47.652729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.233 [2024-11-20 06:18:47.652737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:28.233 [2024-11-20 06:18:47.652745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:20:28.233 [2024-11-20 06:18:47.652754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.233 [2024-11-20 06:18:47.652797] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:28.233 [2024-11-20 06:18:47.652810] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:31.611 [2024-11-20 06:18:50.871608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.871669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:31.611 [2024-11-20 06:18:50.871686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3218.797 ms 00:20:31.611 [2024-11-20 06:18:50.871699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.898464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.898529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.611 [2024-11-20 06:18:50.898542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.523 ms 00:20:31.611 [2024-11-20 06:18:50.898553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.898698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.898710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.611 [2024-11-20 06:18:50.898719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:31.611 [2024-11-20 06:18:50.898733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.930120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.930170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.611 [2024-11-20 06:18:50.930183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.352 ms 00:20:31.611 [2024-11-20 06:18:50.930195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.930233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.930246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.611 [2024-11-20 06:18:50.930256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.611 [2024-11-20 06:18:50.930265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.930662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.930689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.611 [2024-11-20 06:18:50.930698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:20:31.611 [2024-11-20 06:18:50.930707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.930824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.930849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.611 [2024-11-20 06:18:50.930859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:31.611 [2024-11-20 06:18:50.930870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.945370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.945412] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.611 [2024-11-20 06:18:50.945423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.482 ms 00:20:31.611 [2024-11-20 06:18:50.945433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:50.956876] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:31.611 [2024-11-20 06:18:50.959642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:50.959673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.611 [2024-11-20 06:18:50.959685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.113 ms 00:20:31.611 [2024-11-20 06:18:50.959694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:51.045288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:51.045357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:31.611 [2024-11-20 06:18:51.045376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.558 ms 00:20:31.611 [2024-11-20 06:18:51.045386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:51.045590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:51.045605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.611 [2024-11-20 06:18:51.045617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:20:31.611 [2024-11-20 06:18:51.045625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:51.069769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:51.069823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:31.611 [2024-11-20 06:18:51.069838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.092 ms 00:20:31.611 [2024-11-20 06:18:51.069846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.611 [2024-11-20 06:18:51.093894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.611 [2024-11-20 06:18:51.093944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:31.612 [2024-11-20 06:18:51.093959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.988 ms 00:20:31.612 [2024-11-20 06:18:51.093966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.094544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.612 [2024-11-20 06:18:51.094561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.612 [2024-11-20 06:18:51.094571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:31.612 [2024-11-20 06:18:51.094581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.165606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.612 [2024-11-20 06:18:51.165666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:31.612 [2024-11-20 06:18:51.165687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.981 ms 00:20:31.612 [2024-11-20 06:18:51.165696] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.191481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.612 [2024-11-20 06:18:51.191542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:31.612 [2024-11-20 06:18:51.191558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:20:31.612 [2024-11-20 06:18:51.191567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.216088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.612 [2024-11-20 06:18:51.216136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:31.612 [2024-11-20 06:18:51.216150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.474 ms 00:20:31.612 [2024-11-20 06:18:51.216158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.241681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.612 [2024-11-20 06:18:51.241731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.612 [2024-11-20 06:18:51.241745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.477 ms 00:20:31.612 [2024-11-20 06:18:51.241752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.241800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.612 [2024-11-20 06:18:51.241809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.612 [2024-11-20 06:18:51.241821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:31.612 [2024-11-20 06:18:51.241829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.241909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.612 [2024-11-20 06:18:51.241919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:31.612 [2024-11-20 06:18:51.241931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:31.612 [2024-11-20 06:18:51.241938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.612 [2024-11-20 06:18:51.242792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3602.343 ms, result 0 00:20:31.873 { 00:20:31.873 "name": "ftl0", 00:20:31.873 "uuid": "9774b9d5-ae7a-4993-b44f-33dde2820a77" 00:20:31.873 } 00:20:31.873 06:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:31.873 06:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:31.873 06:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:31.873 06:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:31.873 06:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:32.134 /dev/nbd0 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:32.134 1+0 records in 00:20:32.134 1+0 records out 00:20:32.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477176 s, 8.6 MB/s 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:20:32.134 06:18:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:32.396 [2024-11-20 06:18:51.786256] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:20:32.396 [2024-11-20 06:18:51.786393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76309 ] 00:20:32.396 [2024-11-20 06:18:51.946951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.661 [2024-11-20 06:18:52.072708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.047  [2024-11-20T06:18:54.624Z] Copying: 193/1024 [MB] (193 MBps) [2024-11-20T06:18:55.564Z] Copying: 387/1024 [MB] (193 MBps) [2024-11-20T06:18:56.561Z] Copying: 583/1024 [MB] (195 MBps) [2024-11-20T06:18:57.502Z] Copying: 776/1024 [MB] (193 MBps) [2024-11-20T06:18:57.764Z] Copying: 969/1024 [MB] (192 MBps) [2024-11-20T06:18:58.335Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:20:38.702 00:20:38.964 06:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:40.944 06:19:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:40.944 [2024-11-20 06:19:00.548358] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:20:40.944 [2024-11-20 06:19:00.548678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76407 ] 00:20:41.205 [2024-11-20 06:19:00.709008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.205 [2024-11-20 06:19:00.815089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.594  [2024-11-20T06:19:03.167Z] Copying: 5380/1048576 [kB] (5380 kBps) [2024-11-20T06:19:04.106Z] Copying: 14560/1048576 [kB] (9180 kBps) [2024-11-20T06:19:05.046Z] Copying: 23352/1048576 [kB] (8792 kBps) [2024-11-20T06:19:06.432Z] Copying: 32072/1048576 [kB] (8720 kBps) [2024-11-20T06:19:07.373Z] Copying: 40384/1048576 [kB] (8312 kBps) [2024-11-20T06:19:08.314Z] Copying: 47872/1048576 [kB] (7488 kBps) [2024-11-20T06:19:09.260Z] Copying: 55980/1048576 [kB] (8108 kBps) [2024-11-20T06:19:10.260Z] Copying: 64624/1048576 [kB] (8644 kBps) [2024-11-20T06:19:11.204Z] Copying: 74216/1048576 [kB] (9592 kBps) [2024-11-20T06:19:12.149Z] Copying: 82312/1048576 [kB] (8096 kBps) [2024-11-20T06:19:13.105Z] Copying: 91368/1048576 [kB] (9056 kBps) [2024-11-20T06:19:14.040Z] Copying: 100048/1048576 [kB] (8680 kBps) [2024-11-20T06:19:15.424Z] Copying: 108/1024 [MB] (11 MBps) [2024-11-20T06:19:16.072Z] Copying: 121008/1048576 [kB] (9628 kBps) [2024-11-20T06:19:17.466Z] Copying: 130368/1048576 [kB] (9360 kBps) [2024-11-20T06:19:18.039Z] Copying: 140044/1048576 [kB] (9676 kBps) [2024-11-20T06:19:19.437Z] Copying: 150/1024 [MB] (13 MBps) [2024-11-20T06:19:20.384Z] Copying: 165/1024 [MB] (14 MBps) [2024-11-20T06:19:21.327Z] Copying: 176/1024 [MB] (11 MBps) [2024-11-20T06:19:22.271Z] Copying: 189/1024 [MB] (13 MBps) [2024-11-20T06:19:23.210Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-20T06:19:24.153Z] Copying: 212/1024 [MB] (12 MBps) [2024-11-20T06:19:25.096Z] Copying: 228/1024 [MB] (15 MBps) [2024-11-20T06:19:26.040Z] Copying: 248/1024 [MB] (20 MBps) [2024-11-20T06:19:27.423Z] Copying: 265/1024 [MB] (16 MBps) [2024-11-20T06:19:28.362Z] Copying: 279/1024 [MB] (13 MBps) [2024-11-20T06:19:29.303Z] Copying: 291/1024 [MB] (12 MBps) [2024-11-20T06:19:30.305Z] Copying: 307344/1048576 [kB] (9292 kBps) [2024-11-20T06:19:31.287Z] Copying: 316288/1048576 [kB] (8944 kBps) [2024-11-20T06:19:32.227Z] Copying: 325664/1048576 [kB] (9376 kBps) [2024-11-20T06:19:33.171Z] Copying: 330/1024 [MB] (12 MBps) [2024-11-20T06:19:34.138Z] Copying: 347/1024 [MB] (17 MBps) [2024-11-20T06:19:35.080Z] Copying: 363/1024 [MB] (15 MBps) [2024-11-20T06:19:36.466Z] Copying: 378/1024 [MB] (15 MBps) [2024-11-20T06:19:37.042Z] Copying: 398/1024 [MB] (20 MBps) [2024-11-20T06:19:38.429Z] Copying: 412/1024 [MB] (13 MBps) [2024-11-20T06:19:39.376Z] Copying: 430768/1048576 [kB] (8808 kBps) [2024-11-20T06:19:40.318Z] Copying: 431/1024 [MB] (11 MBps) [2024-11-20T06:19:41.259Z] Copying: 452244/1048576 [kB] (9996 kBps) [2024-11-20T06:19:42.195Z] Copying: 454/1024 [MB] (12 MBps) [2024-11-20T06:19:43.136Z] Copying: 466/1024 [MB] (11 MBps) [2024-11-20T06:19:44.079Z] Copying: 479/1024 [MB] (13 MBps) [2024-11-20T06:19:45.467Z] Copying: 490/1024 [MB] (10 MBps) [2024-11-20T06:19:46.038Z] Copying: 500/1024 [MB] (10 MBps) [2024-11-20T06:19:47.055Z] Copying: 521680/1048576 [kB] (8808 kBps) [2024-11-20T06:19:48.440Z] Copying: 531208/1048576 [kB] (9528 kBps) [2024-11-20T06:19:49.392Z] Copying: 533/1024 [MB] (14 MBps) [2024-11-20T06:19:50.343Z] Copying: 
553924/1048576 [kB] (8132 kBps) [2024-11-20T06:19:51.331Z] Copying: 562384/1048576 [kB] (8460 kBps) [2024-11-20T06:19:52.273Z] Copying: 571280/1048576 [kB] (8896 kBps) [2024-11-20T06:19:53.218Z] Copying: 569/1024 [MB] (11 MBps) [2024-11-20T06:19:54.162Z] Copying: 593084/1048576 [kB] (10176 kBps) [2024-11-20T06:19:55.165Z] Copying: 601776/1048576 [kB] (8692 kBps) [2024-11-20T06:19:56.107Z] Copying: 610320/1048576 [kB] (8544 kBps) [2024-11-20T06:19:57.052Z] Copying: 618412/1048576 [kB] (8092 kBps) [2024-11-20T06:19:58.441Z] Copying: 625972/1048576 [kB] (7560 kBps) [2024-11-20T06:19:59.383Z] Copying: 634328/1048576 [kB] (8356 kBps) [2024-11-20T06:20:00.325Z] Copying: 642372/1048576 [kB] (8044 kBps) [2024-11-20T06:20:01.265Z] Copying: 651192/1048576 [kB] (8820 kBps) [2024-11-20T06:20:02.206Z] Copying: 659920/1048576 [kB] (8728 kBps) [2024-11-20T06:20:03.149Z] Copying: 668352/1048576 [kB] (8432 kBps) [2024-11-20T06:20:04.161Z] Copying: 677040/1048576 [kB] (8688 kBps) [2024-11-20T06:20:05.106Z] Copying: 684880/1048576 [kB] (7840 kBps) [2024-11-20T06:20:06.048Z] Copying: 694268/1048576 [kB] (9388 kBps) [2024-11-20T06:20:07.434Z] Copying: 702352/1048576 [kB] (8084 kBps) [2024-11-20T06:20:08.377Z] Copying: 712532/1048576 [kB] (10180 kBps) [2024-11-20T06:20:09.322Z] Copying: 721640/1048576 [kB] (9108 kBps) [2024-11-20T06:20:10.264Z] Copying: 715/1024 [MB] (10 MBps) [2024-11-20T06:20:11.285Z] Copying: 740964/1048576 [kB] (8344 kBps) [2024-11-20T06:20:12.227Z] Copying: 749372/1048576 [kB] (8408 kBps) [2024-11-20T06:20:13.168Z] Copying: 757580/1048576 [kB] (8208 kBps) [2024-11-20T06:20:14.111Z] Copying: 765672/1048576 [kB] (8092 kBps) [2024-11-20T06:20:15.138Z] Copying: 773692/1048576 [kB] (8020 kBps) [2024-11-20T06:20:16.077Z] Copying: 781768/1048576 [kB] (8076 kBps) [2024-11-20T06:20:17.451Z] Copying: 777/1024 [MB] (13 MBps) [2024-11-20T06:20:18.067Z] Copying: 808/1024 [MB] (30 MBps) [2024-11-20T06:20:19.450Z] Copying: 836/1024 [MB] (28 MBps) [2024-11-20T06:20:20.382Z] Copying: 866/1024 [MB] (29 MBps) [2024-11-20T06:20:21.316Z] Copying: 896/1024 [MB] (29 MBps) [2024-11-20T06:20:22.249Z] Copying: 926/1024 [MB] (29 MBps) [2024-11-20T06:20:23.183Z] Copying: 954/1024 [MB] (28 MBps) [2024-11-20T06:20:24.130Z] Copying: 983/1024 [MB] (28 MBps) [2024-11-20T06:20:24.696Z] Copying: 1013/1024 [MB] (30 MBps) [2024-11-20T06:20:25.264Z] Copying: 1024/1024 [MB] (average 12 MBps) 00:22:05.631 00:22:05.631 06:20:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:05.631 06:20:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:05.632 06:20:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:05.891 [2024-11-20 06:20:25.347270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.891 [2024-11-20 06:20:25.347326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.891 [2024-11-20 06:20:25.347341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.891 [2024-11-20 06:20:25.347351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.891 [2024-11-20 06:20:25.347377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.891 [2024-11-20 06:20:25.350023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.891 [2024-11-20 06:20:25.350054] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.891 [2024-11-20 06:20:25.350068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.625 ms 00:22:05.891 [2024-11-20 06:20:25.350077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.891 [2024-11-20 06:20:25.353202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.891 [2024-11-20 06:20:25.353235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.891 [2024-11-20 06:20:25.353247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.090 ms 00:22:05.891 [2024-11-20 06:20:25.353255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.891 [2024-11-20 06:20:25.369307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.891 [2024-11-20 06:20:25.369344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.891 [2024-11-20 06:20:25.369357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.030 ms 00:22:05.891 [2024-11-20 06:20:25.369365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.891 [2024-11-20 06:20:25.375586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.375615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:05.892 [2024-11-20 06:20:25.375628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:22:05.892 [2024-11-20 06:20:25.375637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.399318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.399356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.892 [2024-11-20 06:20:25.399369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.614 ms 00:22:05.892 [2024-11-20 06:20:25.399377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.414784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.414820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.892 [2024-11-20 06:20:25.414834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.363 ms 00:22:05.892 [2024-11-20 06:20:25.414846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.415001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.415013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.892 [2024-11-20 06:20:25.415024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:22:05.892 [2024-11-20 06:20:25.415034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.439258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.439293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:05.892 [2024-11-20 06:20:25.439306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.202 ms 00:22:05.892 [2024-11-20 06:20:25.439315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.462436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.462471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:05.892 [2024-11-20 06:20:25.462485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.080 ms 00:22:05.892 [2024-11-20 06:20:25.462500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.485812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.485846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:05.892 [2024-11-20 06:20:25.485860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.270 ms 00:22:05.892 [2024-11-20 06:20:25.485867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.508944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.892 [2024-11-20 06:20:25.508979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.892 [2024-11-20 06:20:25.508991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.998 ms 00:22:05.892 [2024-11-20 06:20:25.508999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.892 [2024-11-20 06:20:25.509035] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.892 [2024-11-20 06:20:25.509050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 
06:20:25.509185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:22:05.892 [2024-11-20 06:20:25.509393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:05.892 [2024-11-20 06:20:25.509596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:05.893 [2024-11-20 06:20:25.509932] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:05.893 [2024-11-20 06:20:25.509941] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9774b9d5-ae7a-4993-b44f-33dde2820a77 00:22:05.893 [2024-11-20 06:20:25.509949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:05.893 [2024-11-20 06:20:25.509960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:05.893 [2024-11-20 06:20:25.509967] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:05.893 [2024-11-20 06:20:25.509981] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:05.893 [2024-11-20 06:20:25.509988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:05.893 [2024-11-20 06:20:25.509997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:05.893 [2024-11-20 06:20:25.510004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.893 [2024-11-20 06:20:25.510013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.893 [2024-11-20 06:20:25.510019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.893 [2024-11-20 06:20:25.510029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.893 [2024-11-20 06:20:25.510036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.893 [2024-11-20 06:20:25.510046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:22:05.893 [2024-11-20 06:20:25.510053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.893 [2024-11-20 06:20:25.522241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.893 [2024-11-20 06:20:25.522277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.893 [2024-11-20 06:20:25.522289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.154 ms 00:22:05.893 [2024-11-20 
06:20:25.522297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.893 [2024-11-20 06:20:25.522676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.893 [2024-11-20 06:20:25.522687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.893 [2024-11-20 06:20:25.522698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:22:05.893 [2024-11-20 06:20:25.522705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.150 [2024-11-20 06:20:25.564122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.150 [2024-11-20 06:20:25.564163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.150 [2024-11-20 06:20:25.564176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.150 [2024-11-20 06:20:25.564184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.150 [2024-11-20 06:20:25.564249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.150 [2024-11-20 06:20:25.564258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.150 [2024-11-20 06:20:25.564267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.150 [2024-11-20 06:20:25.564274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.150 [2024-11-20 06:20:25.564349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.150 [2024-11-20 06:20:25.564360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.150 [2024-11-20 06:20:25.564370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.150 [2024-11-20 06:20:25.564378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.150 [2024-11-20 06:20:25.564398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.150 [2024-11-20 06:20:25.564405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.150 [2024-11-20 06:20:25.564414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.150 [2024-11-20 06:20:25.564422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.150 [2024-11-20 06:20:25.641029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.150 [2024-11-20 06:20:25.641078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.150 [2024-11-20 06:20:25.641091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.150 [2024-11-20 06:20:25.641099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.150 [2024-11-20 06:20:25.704246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.150 [2024-11-20 06:20:25.704291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.150 [2024-11-20 06:20:25.704304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.150 [2024-11-20 06:20:25.704312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.150 [2024-11-20 06:20:25.704396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.150 [2024-11-20 06:20:25.704405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.151 [2024-11-20 06:20:25.704415] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.151 [2024-11-20 06:20:25.704425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.151 [2024-11-20 06:20:25.704472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.151 [2024-11-20 06:20:25.704482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.151 [2024-11-20 06:20:25.704509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.151 [2024-11-20 06:20:25.704518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.151 [2024-11-20 06:20:25.704606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.151 [2024-11-20 06:20:25.704616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.151 [2024-11-20 06:20:25.704626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.151 [2024-11-20 06:20:25.704636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.151 [2024-11-20 06:20:25.704668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.151 [2024-11-20 06:20:25.704677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:06.151 [2024-11-20 06:20:25.704686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.151 [2024-11-20 06:20:25.704693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.151 [2024-11-20 06:20:25.704728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.151 [2024-11-20 06:20:25.704736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.151 [2024-11-20 06:20:25.704745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.151 [2024-11-20 06:20:25.704753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.151 [2024-11-20 06:20:25.704798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.151 [2024-11-20 06:20:25.704807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.151 [2024-11-20 06:20:25.704818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.151 [2024-11-20 06:20:25.704825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.151 [2024-11-20 06:20:25.704948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.649 ms, result 0 00:22:06.151 true 00:22:06.151 06:20:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76172 00:22:06.151 06:20:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76172 00:22:06.151 06:20:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:06.409 [2024-11-20 06:20:25.791901] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
00:22:06.409 [2024-11-20 06:20:25.792022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77284 ] 00:22:06.409 [2024-11-20 06:20:25.945831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.666 [2024-11-20 06:20:26.047568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.631  [2024-11-20T06:20:28.643Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-20T06:20:29.575Z] Copying: 385/1024 [MB] (192 MBps) [2024-11-20T06:20:30.527Z] Copying: 577/1024 [MB] (191 MBps) [2024-11-20T06:20:31.462Z] Copying: 768/1024 [MB] (191 MBps) [2024-11-20T06:20:31.719Z] Copying: 959/1024 [MB] (191 MBps) [2024-11-20T06:20:32.285Z] Copying: 1024/1024 [MB] (average 191 MBps) 00:22:12.652 00:22:12.652 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76172 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:12.910 06:20:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:12.910 [2024-11-20 06:20:32.341324] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:22:12.910 [2024-11-20 06:20:32.341425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77360 ] 00:22:12.910 [2024-11-20 06:20:32.490820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.168 [2024-11-20 06:20:32.576162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.168 [2024-11-20 06:20:32.790541] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.168 [2024-11-20 06:20:32.790596] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.427 [2024-11-20 06:20:32.853461] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:13.427 [2024-11-20 06:20:32.853676] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:13.427 [2024-11-20 06:20:32.854021] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:13.427 [2024-11-20 06:20:33.039433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.039485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.427 [2024-11-20 06:20:33.039510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:13.427 [2024-11-20 06:20:33.039517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.039574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.039583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.427 [2024-11-20 06:20:33.039590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:13.427 [2024-11-20 06:20:33.039596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.039611] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.427 [2024-11-20 06:20:33.040193] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.427 [2024-11-20 06:20:33.040206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.040212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.427 [2024-11-20 06:20:33.040219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:22:13.427 [2024-11-20 06:20:33.040225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.041258] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:13.427 [2024-11-20 06:20:33.051131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.051265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:13.427 [2024-11-20 06:20:33.051280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.874 ms 00:22:13.427 [2024-11-20 06:20:33.051287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.051332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.051340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:13.427 [2024-11-20 06:20:33.051346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:13.427 [2024-11-20 06:20:33.051352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.056251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.056280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.427 [2024-11-20 06:20:33.056288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.857 ms 00:22:13.427 [2024-11-20 06:20:33.056295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.056353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.056360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.427 [2024-11-20 06:20:33.056366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:13.427 [2024-11-20 06:20:33.056372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.056418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.427 [2024-11-20 06:20:33.056426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.427 [2024-11-20 06:20:33.056433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:13.427 [2024-11-20 06:20:33.056438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.427 [2024-11-20 06:20:33.056454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.686 [2024-11-20 06:20:33.059406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.686 [2024-11-20 06:20:33.059433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.686 [2024-11-20 06:20:33.059442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:22:13.686 [2024-11-20 
06:20:33.059448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.686 [2024-11-20 06:20:33.059476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.686 [2024-11-20 06:20:33.059483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.686 [2024-11-20 06:20:33.059499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.686 [2024-11-20 06:20:33.059505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.686 [2024-11-20 06:20:33.059523] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:13.686 [2024-11-20 06:20:33.059538] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:13.686 [2024-11-20 06:20:33.059566] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:13.686 [2024-11-20 06:20:33.059578] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:13.686 [2024-11-20 06:20:33.059659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.686 [2024-11-20 06:20:33.059667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.686 [2024-11-20 06:20:33.059676] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:13.686 [2024-11-20 06:20:33.059684] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.686 [2024-11-20 06:20:33.059692] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.686 [2024-11-20 06:20:33.059699] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:13.686 [2024-11-20 06:20:33.059705] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.686 [2024-11-20 06:20:33.059710] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.686 [2024-11-20 06:20:33.059716] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.686 [2024-11-20 06:20:33.059722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.686 [2024-11-20 06:20:33.059727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.686 [2024-11-20 06:20:33.059734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:22:13.686 [2024-11-20 06:20:33.059739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.686 [2024-11-20 06:20:33.059805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.686 [2024-11-20 06:20:33.059813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.686 [2024-11-20 06:20:33.059819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:13.686 [2024-11-20 06:20:33.059825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.687 [2024-11-20 06:20:33.059903] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.687 [2024-11-20 06:20:33.059911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.687 [2024-11-20 06:20:33.059917] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:22:13.687 [2024-11-20 06:20:33.059923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.687 [2024-11-20 06:20:33.059929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.687 [2024-11-20 06:20:33.059934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.687 [2024-11-20 06:20:33.059940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:13.687 [2024-11-20 06:20:33.059945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.687 [2024-11-20 06:20:33.059951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:13.687 [2024-11-20 06:20:33.059957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.687 [2024-11-20 06:20:33.059962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.687 [2024-11-20 06:20:33.059972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:13.687 [2024-11-20 06:20:33.059977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.687 [2024-11-20 06:20:33.059982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.687 [2024-11-20 06:20:33.059989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:13.687 [2024-11-20 06:20:33.059994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.687 [2024-11-20 06:20:33.059999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.687 [2024-11-20 06:20:33.060004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:13.687 [2024-11-20 06:20:33.060009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.687 [2024-11-20 06:20:33.060020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.687 [2024-11-20 06:20:33.060030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.687 [2024-11-20 06:20:33.060035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.687 [2024-11-20 06:20:33.060045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.687 [2024-11-20 06:20:33.060050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.687 [2024-11-20 06:20:33.060060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.687 [2024-11-20 06:20:33.060065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.687 [2024-11-20 06:20:33.060075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.687 [2024-11-20 06:20:33.060080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.687 [2024-11-20 06:20:33.060090] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.687 [2024-11-20 06:20:33.060095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:13.687 [2024-11-20 06:20:33.060100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.687 [2024-11-20 06:20:33.060105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.687 [2024-11-20 06:20:33.060111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:13.687 [2024-11-20 06:20:33.060116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.687 [2024-11-20 06:20:33.060126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:13.687 [2024-11-20 06:20:33.060131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060137] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.687 [2024-11-20 06:20:33.060143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.687 [2024-11-20 06:20:33.060149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.687 [2024-11-20 06:20:33.060158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.687 [2024-11-20 06:20:33.060164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.687 [2024-11-20 06:20:33.060169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.687 [2024-11-20 06:20:33.060174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.687 [2024-11-20 06:20:33.060180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.687 [2024-11-20 06:20:33.060185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.687 [2024-11-20 06:20:33.060190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.687 [2024-11-20 06:20:33.060196] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.687 [2024-11-20 06:20:33.060203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.687 [2024-11-20 06:20:33.060210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:13.687 [2024-11-20 06:20:33.060216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:13.687 [2024-11-20 06:20:33.060221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:13.687 [2024-11-20 06:20:33.060227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:13.687 [2024-11-20 06:20:33.060232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:13.687 [2024-11-20 06:20:33.060238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:13.687 [2024-11-20 06:20:33.060243] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:13.687 [2024-11-20 06:20:33.060249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:13.687 [2024-11-20 06:20:33.060254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:13.687 [2024-11-20 06:20:33.060260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:13.687 [2024-11-20 06:20:33.060265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:13.687 [2024-11-20 06:20:33.060270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:13.687 [2024-11-20 06:20:33.060275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:13.687 [2024-11-20 06:20:33.060281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:13.687 [2024-11-20 06:20:33.060286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.687 [2024-11-20 06:20:33.060292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.687 [2024-11-20 06:20:33.060299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.687 [2024-11-20 06:20:33.060304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.687 [2024-11-20 06:20:33.060310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.687 [2024-11-20 06:20:33.060315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.687 [2024-11-20 06:20:33.060321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.687 [2024-11-20 06:20:33.060326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.687 [2024-11-20 06:20:33.060332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:22:13.687 [2024-11-20 06:20:33.060340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.687 [2024-11-20 06:20:33.083044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.687 [2024-11-20 06:20:33.083082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.687 [2024-11-20 06:20:33.083091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.658 ms 00:22:13.687 [2024-11-20 06:20:33.083098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.687 [2024-11-20 06:20:33.083177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.687 [2024-11-20 06:20:33.083187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.687 [2024-11-20 06:20:33.083194] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:13.687 [2024-11-20 06:20:33.083200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.687 [2024-11-20 06:20:33.132554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.687 [2024-11-20 06:20:33.132600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.687 [2024-11-20 06:20:33.132614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.299 ms 00:22:13.687 [2024-11-20 06:20:33.132621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.687 [2024-11-20 06:20:33.132674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.687 [2024-11-20 06:20:33.132682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.687 [2024-11-20 06:20:33.132689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:13.687 [2024-11-20 06:20:33.132695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.687 [2024-11-20 06:20:33.133068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.687 [2024-11-20 06:20:33.133089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.688 [2024-11-20 06:20:33.133097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:22:13.688 [2024-11-20 06:20:33.133104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.133212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.133219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.688 [2024-11-20 06:20:33.133226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:13.688 [2024-11-20 06:20:33.133232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.144412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.144451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.688 [2024-11-20 06:20:33.144462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.161 ms 00:22:13.688 [2024-11-20 06:20:33.144468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.154728] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:13.688 [2024-11-20 06:20:33.154764] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.688 [2024-11-20 06:20:33.154774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.154781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.688 [2024-11-20 06:20:33.154790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.192 ms 00:22:13.688 [2024-11-20 06:20:33.154796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.174344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.174386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:13.688 [2024-11-20 06:20:33.174404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.505 ms 
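The restore steps above (NV cache metadata, valid map, band info, trim, P2L checkpoints, L2P) are FTL replaying the metadata it persisted before spdk_tgt was killed mid-write. A minimal sketch of the sequence this part of the log reflects, using only the binaries, paths, and flags that appear in the log itself ($tgt_pid is a placeholder; the verbatim logic lives in test/ftl/dirty_shutdown.sh):

  SPDK=/home/vagrant/spdk_repo/spdk

  # Kill the target while ftl0 is dirty; this is the
  # "76172 Killed ... spdk_tgt -m 0x1" line earlier in the log.
  kill -9 "$tgt_pid"

  # Re-attach to ftl0 from its saved JSON config and write the second
  # half of the data; FTL startup replays the superblock, NV cache,
  # valid map, band/trim metadata and P2L checkpoints on the way up.
  "$SPDK/build/bin/spdk_dd" \
    --if="$SPDK/test/ftl/testfile2" \
    --ob=ftl0 \
    --count=262144 --seek=262144 \
    --json="$SPDK/test/ftl/config/ftl.json"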
00:22:13.688 [2024-11-20 06:20:33.174411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.184519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.184555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:13.688 [2024-11-20 06:20:33.184564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.057 ms 00:22:13.688 [2024-11-20 06:20:33.184570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.193627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.193655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:13.688 [2024-11-20 06:20:33.193663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.022 ms 00:22:13.688 [2024-11-20 06:20:33.193669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.194156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.194174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:13.688 [2024-11-20 06:20:33.194182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:22:13.688 [2024-11-20 06:20:33.194188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.240924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.240976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.688 [2024-11-20 06:20:33.240989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.721 ms 00:22:13.688 [2024-11-20 06:20:33.240997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.250090] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:13.688 [2024-11-20 06:20:33.253400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.253447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.688 [2024-11-20 06:20:33.253464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.351 ms 00:22:13.688 [2024-11-20 06:20:33.253477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.253631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.253648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:13.688 [2024-11-20 06:20:33.253662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:13.688 [2024-11-20 06:20:33.253673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.253770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.253785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:13.688 [2024-11-20 06:20:33.253797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:13.688 [2024-11-20 06:20:33.253808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.253840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 
06:20:33.253856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:13.688 [2024-11-20 06:20:33.253868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:13.688 [2024-11-20 06:20:33.253879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.253921] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:13.688 [2024-11-20 06:20:33.253936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.253947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:13.688 [2024-11-20 06:20:33.253959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:13.688 [2024-11-20 06:20:33.253970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.274748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.274792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:13.688 [2024-11-20 06:20:33.274803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.748 ms 00:22:13.688 [2024-11-20 06:20:33.274809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.274904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.688 [2024-11-20 06:20:33.274917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:13.688 [2024-11-20 06:20:33.274926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:13.688 [2024-11-20 06:20:33.274937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.688 [2024-11-20 06:20:33.275867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 236.050 ms, result 0 00:22:15.060  [2024-11-20T06:20:35.627Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-20T06:20:36.561Z] Copying: 90/1024 [MB] (45 MBps) [2024-11-20T06:20:37.501Z] Copying: 134/1024 [MB] (43 MBps) [2024-11-20T06:20:38.456Z] Copying: 163/1024 [MB] (28 MBps) [2024-11-20T06:20:39.400Z] Copying: 194/1024 [MB] (31 MBps) [2024-11-20T06:20:40.338Z] Copying: 216/1024 [MB] (22 MBps) [2024-11-20T06:20:41.722Z] Copying: 234/1024 [MB] (17 MBps) [2024-11-20T06:20:42.293Z] Copying: 249/1024 [MB] (15 MBps) [2024-11-20T06:20:43.679Z] Copying: 264/1024 [MB] (15 MBps) [2024-11-20T06:20:44.621Z] Copying: 285/1024 [MB] (20 MBps) [2024-11-20T06:20:45.556Z] Copying: 296/1024 [MB] (11 MBps) [2024-11-20T06:20:46.490Z] Copying: 309/1024 [MB] (12 MBps) [2024-11-20T06:20:47.422Z] Copying: 340/1024 [MB] (30 MBps) [2024-11-20T06:20:48.353Z] Copying: 372/1024 [MB] (32 MBps) [2024-11-20T06:20:49.327Z] Copying: 408/1024 [MB] (36 MBps) [2024-11-20T06:20:50.699Z] Copying: 451/1024 [MB] (43 MBps) [2024-11-20T06:20:51.631Z] Copying: 498/1024 [MB] (46 MBps) [2024-11-20T06:20:52.614Z] Copying: 519/1024 [MB] (21 MBps) [2024-11-20T06:20:53.560Z] Copying: 539/1024 [MB] (19 MBps) [2024-11-20T06:20:54.493Z] Copying: 579/1024 [MB] (40 MBps) [2024-11-20T06:20:55.427Z] Copying: 625/1024 [MB] (45 MBps) [2024-11-20T06:20:56.359Z] Copying: 671/1024 [MB] (46 MBps) [2024-11-20T06:20:57.292Z] Copying: 717/1024 [MB] (46 MBps) [2024-11-20T06:20:58.667Z] Copying: 764/1024 [MB] (46 MBps) [2024-11-20T06:20:59.599Z] Copying: 810/1024 [MB] (45 MBps) [2024-11-20T06:21:00.532Z] Copying: 856/1024 [MB] (45 
MBps) [2024-11-20T06:21:01.464Z] Copying: 903/1024 [MB] (47 MBps) [2024-11-20T06:21:02.396Z] Copying: 952/1024 [MB] (48 MBps) [2024-11-20T06:21:03.329Z] Copying: 992/1024 [MB] (40 MBps) [2024-11-20T06:21:04.703Z] Copying: 1023/1024 [MB] (30 MBps) [2024-11-20T06:21:04.703Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 06:21:04.272815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-11-20 06:21:04.272874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:45.070 [2024-11-20 06:21:04.272888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:45.070 [2024-11-20 06:21:04.272897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-11-20 06:21:04.276167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:45.070 [2024-11-20 06:21:04.279487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-11-20 06:21:04.279539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:45.070 [2024-11-20 06:21:04.279550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:22:45.070 [2024-11-20 06:21:04.279558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-11-20 06:21:04.290427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-11-20 06:21:04.290481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:45.070 [2024-11-20 06:21:04.290501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.088 ms 00:22:45.070 [2024-11-20 06:21:04.290509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-11-20 06:21:04.311246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.311299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:45.071 [2024-11-20 06:21:04.311311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.719 ms 00:22:45.071 [2024-11-20 06:21:04.311320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.317522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.317563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:45.071 [2024-11-20 06:21:04.317575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:22:45.071 [2024-11-20 06:21:04.317583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.341297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.341351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:45.071 [2024-11-20 06:21:04.341365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.668 ms 00:22:45.071 [2024-11-20 06:21:04.341374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.355378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.355420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:45.071 [2024-11-20 06:21:04.355434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.962 ms 00:22:45.071 [2024-11-20 06:21:04.355442] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.439165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.439222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:45.071 [2024-11-20 06:21:04.439244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.679 ms 00:22:45.071 [2024-11-20 06:21:04.439252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.463737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.463779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:45.071 [2024-11-20 06:21:04.463791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.470 ms 00:22:45.071 [2024-11-20 06:21:04.463799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.486769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.486809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:45.071 [2024-11-20 06:21:04.486820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.929 ms 00:22:45.071 [2024-11-20 06:21:04.486828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.509104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.509145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:45.071 [2024-11-20 06:21:04.509157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.238 ms 00:22:45.071 [2024-11-20 06:21:04.509164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.531607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.071 [2024-11-20 06:21:04.531648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:45.071 [2024-11-20 06:21:04.531660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.379 ms 00:22:45.071 [2024-11-20 06:21:04.531668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.071 [2024-11-20 06:21:04.531710] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:45.071 [2024-11-20 06:21:04.531725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125440 / 261120 wr_cnt: 1 state: open 00:22:45.071 [2024-11-20 06:21:04.531736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 
0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.531994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532184] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:45.071 [2024-11-20 06:21:04.532192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 
06:21:04.532384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:45.072 [2024-11-20 06:21:04.532543] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:45.072 [2024-11-20 06:21:04.532550] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9774b9d5-ae7a-4993-b44f-33dde2820a77 00:22:45.072 [2024-11-20 06:21:04.532558] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125440 00:22:45.072 [2024-11-20 06:21:04.532569] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126400 00:22:45.072 [2024-11-20 06:21:04.532582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125440 00:22:45.072 [2024-11-20 06:21:04.532590] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0077 00:22:45.072 [2024-11-20 06:21:04.532597] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:45.072 [2024-11-20 06:21:04.532606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:45.072 [2024-11-20 06:21:04.532616] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:45.072 [2024-11-20 06:21:04.532623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:45.072 [2024-11-20 06:21:04.532630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:45.072 [2024-11-20 06:21:04.532637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.072 [2024-11-20 06:21:04.532648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:45.072 [2024-11-20 06:21:04.532656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:22:45.072 [2024-11-20 06:21:04.532664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.072 [2024-11-20 06:21:04.544895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.072 [2024-11-20 06:21:04.544933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:45.072 [2024-11-20 06:21:04.544944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.213 ms 00:22:45.072 [2024-11-20 06:21:04.544952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.072 [2024-11-20 06:21:04.545294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.072 [2024-11-20 06:21:04.545307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:45.072 [2024-11-20 06:21:04.545317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:45.072 [2024-11-20 06:21:04.545329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.072 [2024-11-20 06:21:04.577751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.072 [2024-11-20 06:21:04.577801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.072 [2024-11-20 06:21:04.577812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.072 [2024-11-20 06:21:04.577821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.072 [2024-11-20 06:21:04.577889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.072 [2024-11-20 06:21:04.577898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.072 [2024-11-20 06:21:04.577905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.072 [2024-11-20 06:21:04.577915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.072 [2024-11-20 06:21:04.577997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.072 [2024-11-20 06:21:04.578007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.072 [2024-11-20 06:21:04.578015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.072 [2024-11-20 06:21:04.578022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.072 [2024-11-20 06:21:04.578036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.072 [2024-11-20 06:21:04.578045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.072 [2024-11-20 06:21:04.578052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.072 [2024-11-20 06:21:04.578059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.072 [2024-11-20 06:21:04.654578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:22:45.072 [2024-11-20 06:21:04.654628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.072 [2024-11-20 06:21:04.654640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.072 [2024-11-20 06:21:04.654648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 06:21:04.717333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.338 [2024-11-20 06:21:04.717375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.338 [2024-11-20 06:21:04.717386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.338 [2024-11-20 06:21:04.717394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 06:21:04.717450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.338 [2024-11-20 06:21:04.717459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.338 [2024-11-20 06:21:04.717467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.338 [2024-11-20 06:21:04.717475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 06:21:04.717533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.338 [2024-11-20 06:21:04.717543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.338 [2024-11-20 06:21:04.717550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.338 [2024-11-20 06:21:04.717557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 06:21:04.717644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.338 [2024-11-20 06:21:04.717653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.338 [2024-11-20 06:21:04.717661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.338 [2024-11-20 06:21:04.717668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 06:21:04.717697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.338 [2024-11-20 06:21:04.717705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:45.338 [2024-11-20 06:21:04.717713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.338 [2024-11-20 06:21:04.717720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 06:21:04.717752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.338 [2024-11-20 06:21:04.717763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.338 [2024-11-20 06:21:04.717771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.338 [2024-11-20 06:21:04.717778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 06:21:04.717817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.338 [2024-11-20 06:21:04.717826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.338 [2024-11-20 06:21:04.717833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.338 [2024-11-20 06:21:04.717840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.338 [2024-11-20 
06:21:04.717948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 447.543 ms, result 0 00:22:47.237 00:22:47.237 00:22:47.237 06:21:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:49.792 06:21:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:49.792 [2024-11-20 06:21:09.002631] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:22:49.792 [2024-11-20 06:21:09.002761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77725 ] 00:22:49.792 [2024-11-20 06:21:09.161187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.792 [2024-11-20 06:21:09.246489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.050 [2024-11-20 06:21:09.461419] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.051 [2024-11-20 06:21:09.461470] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.051 [2024-11-20 06:21:09.608974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.609012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.051 [2024-11-20 06:21:09.609026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:50.051 [2024-11-20 06:21:09.609032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.609070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.609078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.051 [2024-11-20 06:21:09.609087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:50.051 [2024-11-20 06:21:09.609093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.609107] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.051 [2024-11-20 06:21:09.609679] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.051 [2024-11-20 06:21:09.609696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.609703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.051 [2024-11-20 06:21:09.609710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:22:50.051 [2024-11-20 06:21:09.609716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.610791] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:50.051 [2024-11-20 06:21:09.620607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.620631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:50.051 [2024-11-20 06:21:09.620640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 9.817 ms 00:22:50.051 [2024-11-20 06:21:09.620646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.620699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.620707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:50.051 [2024-11-20 06:21:09.620713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:50.051 [2024-11-20 06:21:09.620719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.625435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.625457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.051 [2024-11-20 06:21:09.625465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:22:50.051 [2024-11-20 06:21:09.625474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.625539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.625546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.051 [2024-11-20 06:21:09.625553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:50.051 [2024-11-20 06:21:09.625559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.625605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.625612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.051 [2024-11-20 06:21:09.625619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:50.051 [2024-11-20 06:21:09.625625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.625645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.051 [2024-11-20 06:21:09.628406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.628427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.051 [2024-11-20 06:21:09.628434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:22:50.051 [2024-11-20 06:21:09.628442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.628465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.628472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.051 [2024-11-20 06:21:09.628478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:50.051 [2024-11-20 06:21:09.628484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.628513] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:50.051 [2024-11-20 06:21:09.628528] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:50.051 [2024-11-20 06:21:09.628557] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:50.051 [2024-11-20 06:21:09.628571] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob load 0x190 bytes 00:22:50.051 [2024-11-20 06:21:09.628653] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.051 [2024-11-20 06:21:09.628661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.051 [2024-11-20 06:21:09.628669] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:50.051 [2024-11-20 06:21:09.628678] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.051 [2024-11-20 06:21:09.628685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.051 [2024-11-20 06:21:09.628691] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:50.051 [2024-11-20 06:21:09.628696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.051 [2024-11-20 06:21:09.628702] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.051 [2024-11-20 06:21:09.628709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.051 [2024-11-20 06:21:09.628715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.628721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.051 [2024-11-20 06:21:09.628727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:22:50.051 [2024-11-20 06:21:09.628733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.628801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.051 [2024-11-20 06:21:09.628807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.051 [2024-11-20 06:21:09.628813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:50.051 [2024-11-20 06:21:09.628818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.051 [2024-11-20 06:21:09.628899] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.051 [2024-11-20 06:21:09.628907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.051 [2024-11-20 06:21:09.628913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.051 [2024-11-20 06:21:09.628919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.051 [2024-11-20 06:21:09.628925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.051 [2024-11-20 06:21:09.628930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.051 [2024-11-20 06:21:09.628935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:50.051 [2024-11-20 06:21:09.628941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.051 [2024-11-20 06:21:09.628947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:50.051 [2024-11-20 06:21:09.628953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.051 [2024-11-20 06:21:09.628958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.051 [2024-11-20 06:21:09.628963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:50.051 [2024-11-20 06:21:09.628968] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.051 [2024-11-20 06:21:09.628977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.051 [2024-11-20 06:21:09.628982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:50.051 [2024-11-20 06:21:09.628992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.051 [2024-11-20 06:21:09.628997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.051 [2024-11-20 06:21:09.629002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:50.051 [2024-11-20 06:21:09.629007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.051 [2024-11-20 06:21:09.629013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.051 [2024-11-20 06:21:09.629018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:50.051 [2024-11-20 06:21:09.629023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.051 [2024-11-20 06:21:09.629028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.051 [2024-11-20 06:21:09.629033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:50.051 [2024-11-20 06:21:09.629038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.051 [2024-11-20 06:21:09.629043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.051 [2024-11-20 06:21:09.629048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:50.051 [2024-11-20 06:21:09.629053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.051 [2024-11-20 06:21:09.629058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.051 [2024-11-20 06:21:09.629063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:50.051 [2024-11-20 06:21:09.629069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.051 [2024-11-20 06:21:09.629074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.051 [2024-11-20 06:21:09.629079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:50.051 [2024-11-20 06:21:09.629084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.051 [2024-11-20 06:21:09.629089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.051 [2024-11-20 06:21:09.629094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:50.051 [2024-11-20 06:21:09.629099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.051 [2024-11-20 06:21:09.629104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.052 [2024-11-20 06:21:09.629109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:50.052 [2024-11-20 06:21:09.629114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.052 [2024-11-20 06:21:09.629119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.052 [2024-11-20 06:21:09.629124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:50.052 [2024-11-20 06:21:09.629129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.052 [2024-11-20 06:21:09.629134] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.052 
[2024-11-20 06:21:09.629140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.052 [2024-11-20 06:21:09.629149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.052 [2024-11-20 06:21:09.629155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.052 [2024-11-20 06:21:09.629160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.052 [2024-11-20 06:21:09.629165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.052 [2024-11-20 06:21:09.629171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.052 [2024-11-20 06:21:09.629176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.052 [2024-11-20 06:21:09.629181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.052 [2024-11-20 06:21:09.629186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.052 [2024-11-20 06:21:09.629192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.052 [2024-11-20 06:21:09.629199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.052 [2024-11-20 06:21:09.629205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:50.052 [2024-11-20 06:21:09.629211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:50.052 [2024-11-20 06:21:09.629216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:50.052 [2024-11-20 06:21:09.629222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:50.052 [2024-11-20 06:21:09.629228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:50.052 [2024-11-20 06:21:09.629233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:50.052 [2024-11-20 06:21:09.629239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:50.052 [2024-11-20 06:21:09.629245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:50.052 [2024-11-20 06:21:09.629250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:50.052 [2024-11-20 06:21:09.629256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:50.052 [2024-11-20 06:21:09.629261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:50.052 [2024-11-20 06:21:09.629266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:50.052 [2024-11-20 06:21:09.629272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 
blk_offs:0x7200 blk_sz:0x20 00:22:50.052 [2024-11-20 06:21:09.629278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:50.052 [2024-11-20 06:21:09.629284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.052 [2024-11-20 06:21:09.629292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.052 [2024-11-20 06:21:09.629298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.052 [2024-11-20 06:21:09.629305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.052 [2024-11-20 06:21:09.629311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.052 [2024-11-20 06:21:09.629317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.052 [2024-11-20 06:21:09.629324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.052 [2024-11-20 06:21:09.629329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.052 [2024-11-20 06:21:09.629337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:50.052 [2024-11-20 06:21:09.629343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.052 [2024-11-20 06:21:09.651177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.052 [2024-11-20 06:21:09.651204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.052 [2024-11-20 06:21:09.651213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.801 ms 00:22:50.052 [2024-11-20 06:21:09.651219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.052 [2024-11-20 06:21:09.651292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.052 [2024-11-20 06:21:09.651299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:50.052 [2024-11-20 06:21:09.651305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:50.052 [2024-11-20 06:21:09.651311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.693393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.693424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.311 [2024-11-20 06:21:09.693435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.039 ms 00:22:50.311 [2024-11-20 06:21:09.693441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.693484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.693500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.311 [2024-11-20 06:21:09.693510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:50.311 [2024-11-20 06:21:09.693516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.693845] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.693865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.311 [2024-11-20 06:21:09.693874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:22:50.311 [2024-11-20 06:21:09.693880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.693977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.693987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.311 [2024-11-20 06:21:09.693993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:50.311 [2024-11-20 06:21:09.694003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.704617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.704638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.311 [2024-11-20 06:21:09.704646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.596 ms 00:22:50.311 [2024-11-20 06:21:09.704655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.714561] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:50.311 [2024-11-20 06:21:09.714588] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:50.311 [2024-11-20 06:21:09.714598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.714605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:50.311 [2024-11-20 06:21:09.714613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.860 ms 00:22:50.311 [2024-11-20 06:21:09.714619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.733714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.733749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:50.311 [2024-11-20 06:21:09.733759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.061 ms 00:22:50.311 [2024-11-20 06:21:09.733766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.743050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.743083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:50.311 [2024-11-20 06:21:09.743091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.244 ms 00:22:50.311 [2024-11-20 06:21:09.743098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.752094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.752118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:50.311 [2024-11-20 06:21:09.752126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.966 ms 00:22:50.311 [2024-11-20 06:21:09.752132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.752638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
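
Every management step above is reported as the same four-notice group -- Action / name / duration / status -- always attributed to mngt/ftl_mngt.c lines 427-431, i.e. a single trace_step() helper. Below is a minimal, self-contained C sketch of how such a step tracer could be structured; the struct and function names are illustrative, not SPDK's actual implementation.

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical step tracer mirroring the Action/name/duration/status
     * quartets above. Names are illustrative, not SPDK source. */
    struct step_trace {
        const char *dev;       /* e.g. "ftl0" */
        const char *name;      /* e.g. "Initialize bands" */
        struct timespec start; /* taken when the step begins */
    };

    static double elapsed_ms(const struct timespec *a, const struct timespec *b)
    {
        return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
    }

    static void step_begin(struct step_trace *t, const char *dev, const char *name)
    {
        t->dev = dev;
        t->name = name;
        clock_gettime(CLOCK_MONOTONIC, &t->start);
    }

    static void step_end(const struct step_trace *t, int status)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        printf("*NOTICE*: [FTL][%s] Action\n", t->dev);
        printf("*NOTICE*: [FTL][%s] name: %s\n", t->dev, t->name);
        printf("*NOTICE*: [FTL][%s] duration: %.3f ms\n",
               t->dev, elapsed_ms(&t->start, &now));
        printf("*NOTICE*: [FTL][%s] status: %d\n", t->dev, status);
    }

    int main(void)
    {
        struct step_trace t;
        step_begin(&t, "ftl0", "Initialize bands");
        /* ... the step body would run here ... */
        step_end(&t, 0);
        return 0;
    }
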
00:22:50.311 [2024-11-20 06:21:09.752656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:50.311 [2024-11-20 06:21:09.752663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:22:50.311 [2024-11-20 06:21:09.752671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.797513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.797555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:50.311 [2024-11-20 06:21:09.797570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.827 ms 00:22:50.311 [2024-11-20 06:21:09.797578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.805895] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:50.311 [2024-11-20 06:21:09.808186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.808210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:50.311 [2024-11-20 06:21:09.808220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.566 ms 00:22:50.311 [2024-11-20 06:21:09.808227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.808296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.808306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:50.311 [2024-11-20 06:21:09.808312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:50.311 [2024-11-20 06:21:09.808321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.809563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.809586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:50.311 [2024-11-20 06:21:09.809594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:22:50.311 [2024-11-20 06:21:09.809600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.809622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.809629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:50.311 [2024-11-20 06:21:09.809635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:50.311 [2024-11-20 06:21:09.809641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.809671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:50.311 [2024-11-20 06:21:09.809679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.809685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:50.311 [2024-11-20 06:21:09.809691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:50.311 [2024-11-20 06:21:09.809697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.828580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.828608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 
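
The "SB metadata layout" records earlier in this trace enumerate one row per metadata region as type/ver/blk_offs/blk_sz. The type-to-region mapping can be cross-checked against the NV cache layout dump: type 0x2 sits at blk_offs 0x20 with blk_sz 0x5000, which at a 4 KiB block is exactly the l2p region's 0.12 MiB offset and 80.00 MiB size. A hypothetical C mirror of those rows follows; the struct layout is assumed for illustration, not taken from the on-disk superblock format.

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed in-memory mirror of the per-region records printed by
     * ftl_superblock_v5_md_layout_dump above; not the on-disk format. */
    struct md_region {
        uint32_t type;     /* region type id */
        uint32_t ver;      /* metadata version of this region */
        uint64_t blk_offs; /* first block of the region */
        uint64_t blk_sz;   /* region length in blocks */
    };

    static void dump_md_layout(const char *dev, const struct md_region *r, size_t n)
    {
        printf("*NOTICE*: [FTL][%s] SB metadata layout - nvc:\n", dev);
        for (size_t i = 0; i < n; i++) {
            printf("*NOTICE*: [FTL][%s] Region type:0x%x ver:%u "
                   "blk_offs:0x%jx blk_sz:0x%jx\n",
                   dev, r[i].type, r[i].ver,
                   (uintmax_t)r[i].blk_offs, (uintmax_t)r[i].blk_sz);
        }
    }

    int main(void)
    {
        /* First rows copied from the dump above; the name mapping in the
         * comments is inferred from matching offsets/sizes, not stated
         * explicitly by the log. */
        const struct md_region nvc[] = {
            { 0x0, 5, 0x0,    0x20   },  /* sb: 0x20 blks = 0.12 MiB  */
            { 0x2, 0, 0x20,   0x5000 },  /* l2p: 0x5000 blks = 80 MiB */
            { 0x3, 2, 0x5020, 0x80   },  /* band_md: 0.50 MiB         */
        };
        dump_md_layout("ftl0", nvc, sizeof(nvc) / sizeof(nvc[0]));
        return 0;
    }
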
00:22:50.311 [2024-11-20 06:21:09.828617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.869 ms 00:22:50.311 [2024-11-20 06:21:09.828627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.828686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.311 [2024-11-20 06:21:09.828694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:50.311 [2024-11-20 06:21:09.828702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:50.311 [2024-11-20 06:21:09.828708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.311 [2024-11-20 06:21:09.829454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.142 ms, result 0 00:22:51.684  [2024-11-20T06:21:12.251Z] Copying: 1336/1048576 [kB] (1336 kBps) [2024-11-20T06:21:13.184Z] Copying: 11/1024 [MB] (10 MBps) [2024-11-20T06:21:14.116Z] Copying: 48/1024 [MB] (36 MBps) [2024-11-20T06:21:15.144Z] Copying: 89/1024 [MB] (41 MBps) [2024-11-20T06:21:16.076Z] Copying: 134/1024 [MB] (44 MBps) [2024-11-20T06:21:17.010Z] Copying: 175/1024 [MB] (40 MBps) [2024-11-20T06:21:18.382Z] Copying: 216/1024 [MB] (41 MBps) [2024-11-20T06:21:19.326Z] Copying: 258/1024 [MB] (42 MBps) [2024-11-20T06:21:20.262Z] Copying: 303/1024 [MB] (45 MBps) [2024-11-20T06:21:21.195Z] Copying: 350/1024 [MB] (46 MBps) [2024-11-20T06:21:22.132Z] Copying: 400/1024 [MB] (49 MBps) [2024-11-20T06:21:23.064Z] Copying: 451/1024 [MB] (50 MBps) [2024-11-20T06:21:23.996Z] Copying: 504/1024 [MB] (53 MBps) [2024-11-20T06:21:25.369Z] Copying: 553/1024 [MB] (49 MBps) [2024-11-20T06:21:26.302Z] Copying: 605/1024 [MB] (51 MBps) [2024-11-20T06:21:27.279Z] Copying: 651/1024 [MB] (46 MBps) [2024-11-20T06:21:28.223Z] Copying: 683/1024 [MB] (31 MBps) [2024-11-20T06:21:29.155Z] Copying: 727/1024 [MB] (44 MBps) [2024-11-20T06:21:30.090Z] Copying: 776/1024 [MB] (48 MBps) [2024-11-20T06:21:31.094Z] Copying: 826/1024 [MB] (50 MBps) [2024-11-20T06:21:32.026Z] Copying: 877/1024 [MB] (50 MBps) [2024-11-20T06:21:33.399Z] Copying: 928/1024 [MB] (51 MBps) [2024-11-20T06:21:33.982Z] Copying: 979/1024 [MB] (50 MBps) [2024-11-20T06:21:34.241Z] Copying: 1018/1024 [MB] (38 MBps) [2024-11-20T06:21:34.499Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-11-20 06:21:34.388206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.866 [2024-11-20 06:21:34.388273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:14.866 [2024-11-20 06:21:34.388284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:14.866 [2024-11-20 06:21:34.388291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.866 [2024-11-20 06:21:34.388308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:14.866 [2024-11-20 06:21:34.390534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.866 [2024-11-20 06:21:34.390563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:14.866 [2024-11-20 06:21:34.390572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:23:14.866 [2024-11-20 06:21:34.390580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.866 [2024-11-20 06:21:34.390761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.866 [2024-11-20 
06:21:34.390775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:14.866 [2024-11-20 06:21:34.390783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:23:14.866 [2024-11-20 06:21:34.390789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.866 [2024-11-20 06:21:34.400478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.866 [2024-11-20 06:21:34.400517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:14.866 [2024-11-20 06:21:34.400525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.676 ms 00:23:14.866 [2024-11-20 06:21:34.400531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.866 [2024-11-20 06:21:34.405417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.867 [2024-11-20 06:21:34.405445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:14.867 [2024-11-20 06:21:34.405459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.865 ms 00:23:14.867 [2024-11-20 06:21:34.405467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.867 [2024-11-20 06:21:34.490014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.867 [2024-11-20 06:21:34.490058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:14.867 [2024-11-20 06:21:34.490070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.507 ms 00:23:14.867 [2024-11-20 06:21:34.490076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.126 [2024-11-20 06:21:34.501565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.126 [2024-11-20 06:21:34.501598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:15.126 [2024-11-20 06:21:34.501609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.453 ms 00:23:15.126 [2024-11-20 06:21:34.501616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.126 [2024-11-20 06:21:34.503265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.126 [2024-11-20 06:21:34.503291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:15.126 [2024-11-20 06:21:34.503299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:23:15.126 [2024-11-20 06:21:34.503305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.126 [2024-11-20 06:21:34.521724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.126 [2024-11-20 06:21:34.521756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:15.126 [2024-11-20 06:21:34.521766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.402 ms 00:23:15.126 [2024-11-20 06:21:34.521772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.126 [2024-11-20 06:21:34.539594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.126 [2024-11-20 06:21:34.539622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:15.126 [2024-11-20 06:21:34.539637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.794 ms 00:23:15.126 [2024-11-20 06:21:34.539643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.126 [2024-11-20 06:21:34.557233] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.126 [2024-11-20 06:21:34.557261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:15.126 [2024-11-20 06:21:34.557270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.565 ms 00:23:15.126 [2024-11-20 06:21:34.557276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.126 [2024-11-20 06:21:34.574617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.126 [2024-11-20 06:21:34.574643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:15.126 [2024-11-20 06:21:34.574651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.297 ms 00:23:15.126 [2024-11-20 06:21:34.574657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.126 [2024-11-20 06:21:34.574681] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:15.126 [2024-11-20 06:21:34.574693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:15.126 [2024-11-20 06:21:34.574701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:23:15.126 [2024-11-20 06:21:34.574708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 
[2024-11-20 06:21:34.574807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:15.126 [2024-11-20 06:21:34.574925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 
state: free 00:23:15.127 [2024-11-20 06:21:34.574960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.574995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 
0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:15.127 [2024-11-20 06:21:34.575303] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:15.127 [2024-11-20 06:21:34.575309] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9774b9d5-ae7a-4993-b44f-33dde2820a77 00:23:15.127 [2024-11-20 06:21:34.575315] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:23:15.127 [2024-11-20 06:21:34.575321] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 139200 00:23:15.127 [2024-11-20 06:21:34.575326] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 137216 00:23:15.127 [2024-11-20 06:21:34.575336] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0145 00:23:15.127 [2024-11-20 06:21:34.575342] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:15.127 [2024-11-20 06:21:34.575348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:15.127 [2024-11-20 06:21:34.575354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:15.127 [2024-11-20 06:21:34.575363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:15.127 [2024-11-20 06:21:34.575368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:15.127 [2024-11-20 06:21:34.575374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.127 [2024-11-20 06:21:34.575381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:15.127 [2024-11-20 06:21:34.575387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:23:15.127 [2024-11-20 06:21:34.575393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.127 [2024-11-20 06:21:34.585095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.127 [2024-11-20 06:21:34.585124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:15.127 [2024-11-20 06:21:34.585133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.689 ms 00:23:15.127 [2024-11-20 06:21:34.585138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.127 [2024-11-20 06:21:34.585410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.127 [2024-11-20 06:21:34.585424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:15.127 [2024-11-20 06:21:34.585431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.258 ms 00:23:15.127 [2024-11-20 06:21:34.585436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.127 [2024-11-20 06:21:34.611252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.127 [2024-11-20 06:21:34.611283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.127 [2024-11-20 06:21:34.611291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.127 [2024-11-20 06:21:34.611298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.127 [2024-11-20 06:21:34.611344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.127 [2024-11-20 06:21:34.611351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.127 [2024-11-20 06:21:34.611357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.127 [2024-11-20 06:21:34.611362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.127 [2024-11-20 06:21:34.611425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.127 [2024-11-20 06:21:34.611433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.128 [2024-11-20 06:21:34.611439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.611445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.611457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.611463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.128 [2024-11-20 06:21:34.611469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.611475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.671865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.671899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.128 [2024-11-20 06:21:34.671908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.671914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.721531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.721574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.128 [2024-11-20 06:21:34.721583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.721588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.721629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.721639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.128 [2024-11-20 06:21:34.721646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.721651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.721693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.721700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.128 
[2024-11-20 06:21:34.721706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.721712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.721780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.721788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.128 [2024-11-20 06:21:34.721796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.721802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.721823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.721829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:15.128 [2024-11-20 06:21:34.721835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.721841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.721868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.721874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.128 [2024-11-20 06:21:34.721880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.721888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.721919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.128 [2024-11-20 06:21:34.721926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.128 [2024-11-20 06:21:34.721932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.128 [2024-11-20 06:21:34.721938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.128 [2024-11-20 06:21:34.722025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.796 ms, result 0 00:23:15.693 00:23:15.693 00:23:15.951 06:21:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:18.477 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:18.477 06:21:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:18.477 [2024-11-20 06:21:37.551715] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
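
The shutdown statistics dumped above are internally consistent: total valid LBAs 262656 equals Band 1's 261120 (closed) plus Band 2's 1536 (open), and the reported WAF is simply total writes over user writes, 139200 / 137216 ≈ 1.0145. A two-line check, with the figures copied from the log (units are whatever the counters use):

    #include <stdio.h>

    /* Write amplification factor as in the ftl_dev_dump_stats block above:
     * WAF = total media writes / user writes. Figures copied from the log. */
    int main(void)
    {
        double total_writes = 139200.0; /* "total writes" from the dump */
        double user_writes  = 137216.0; /* "user writes" from the dump  */
        printf("WAF: %.4f\n", total_writes / user_writes); /* -> 1.0145 */
        return 0;
    }
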
00:23:18.477 [2024-11-20 06:21:37.552009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78022 ] 00:23:18.477 [2024-11-20 06:21:37.708199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.477 [2024-11-20 06:21:37.791662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.477 [2024-11-20 06:21:38.004367] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.477 [2024-11-20 06:21:38.004423] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.736 [2024-11-20 06:21:38.155929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.736 [2024-11-20 06:21:38.156103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:18.736 [2024-11-20 06:21:38.156125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:18.736 [2024-11-20 06:21:38.156132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.156178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.736 [2024-11-20 06:21:38.156187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.736 [2024-11-20 06:21:38.156196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:18.736 [2024-11-20 06:21:38.156202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.156219] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:18.736 [2024-11-20 06:21:38.156763] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:18.736 [2024-11-20 06:21:38.156776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.736 [2024-11-20 06:21:38.156782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.736 [2024-11-20 06:21:38.156789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:23:18.736 [2024-11-20 06:21:38.156795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.157760] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:18.736 [2024-11-20 06:21:38.167832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.736 [2024-11-20 06:21:38.167867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:18.736 [2024-11-20 06:21:38.167878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.073 ms 00:23:18.736 [2024-11-20 06:21:38.167884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.167941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.736 [2024-11-20 06:21:38.167949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:18.736 [2024-11-20 06:21:38.167956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:18.736 [2024-11-20 06:21:38.167962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.172521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
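
This second startup, triggered by the spdk_dd read-back above, re-opens the same device: the base bdev and the nvc0n1p0 write-buffer cache are reattached and ftl_mngt_load_sb reports the saved superblock state. Taken together with "Set FTL dirty state" during the first startup and "Set FTL clean state" during the orderly shutdown, the step names suggest a conventional dirty-flag handshake. The sketch below shows that handshake in miniature; the superblock struct and persist step are hypothetical, and the exact meaning of the "SHM: clean 0, shm_clean 0" fields is not given in the log.

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical dirty/clean handshake implied by the step names above. */
    struct superblock { bool clean; };

    static void persist(struct superblock *sb)
    {
        (void)sb; /* a real implementation would write sb to media here */
    }

    static void ftl_startup(struct superblock *sb)
    {
        if (!sb->clean)
            printf("previous shutdown was dirty -> run recovery\n");
        sb->clean = false;  /* "Set FTL dirty state": device is live */
        persist(sb);
    }

    static void ftl_shutdown(struct superblock *sb)
    {
        /* ... persist L2P, NV cache, band and trim metadata first ... */
        sb->clean = true;   /* "Set FTL clean state": orderly path only */
        persist(sb);
    }

    int main(void)
    {
        struct superblock sb = { .clean = true };
        ftl_startup(&sb);   /* device now marked dirty */
        ftl_shutdown(&sb);  /* orderly path: marked clean again */
        return 0;
    }
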
00:23:18.736 [2024-11-20 06:21:38.172546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.736 [2024-11-20 06:21:38.172554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.509 ms 00:23:18.736 [2024-11-20 06:21:38.172564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.172620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.736 [2024-11-20 06:21:38.172627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.736 [2024-11-20 06:21:38.172633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:18.736 [2024-11-20 06:21:38.172639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.172674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.736 [2024-11-20 06:21:38.172681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.736 [2024-11-20 06:21:38.172687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:18.736 [2024-11-20 06:21:38.172693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.736 [2024-11-20 06:21:38.172711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:18.736 [2024-11-20 06:21:38.175449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.737 [2024-11-20 06:21:38.175577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.737 [2024-11-20 06:21:38.175591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.744 ms 00:23:18.737 [2024-11-20 06:21:38.175600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.737 [2024-11-20 06:21:38.175629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.737 [2024-11-20 06:21:38.175636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.737 [2024-11-20 06:21:38.175642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:18.737 [2024-11-20 06:21:38.175648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.737 [2024-11-20 06:21:38.175663] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:18.737 [2024-11-20 06:21:38.175677] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:18.737 [2024-11-20 06:21:38.175704] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:18.737 [2024-11-20 06:21:38.175718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:18.737 [2024-11-20 06:21:38.175801] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:18.737 [2024-11-20 06:21:38.175809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.737 [2024-11-20 06:21:38.175818] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:18.737 [2024-11-20 06:21:38.175826] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.737 [2024-11-20 06:21:38.175833] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.737 [2024-11-20 06:21:38.175839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:18.737 [2024-11-20 06:21:38.175844] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.737 [2024-11-20 06:21:38.175850] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:18.737 [2024-11-20 06:21:38.175858] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:18.737 [2024-11-20 06:21:38.175863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.737 [2024-11-20 06:21:38.175869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.737 [2024-11-20 06:21:38.175875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:23:18.737 [2024-11-20 06:21:38.175881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.737 [2024-11-20 06:21:38.175948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.737 [2024-11-20 06:21:38.175954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.737 [2024-11-20 06:21:38.175960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:18.737 [2024-11-20 06:21:38.175966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.737 [2024-11-20 06:21:38.176047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.737 [2024-11-20 06:21:38.176055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.737 [2024-11-20 06:21:38.176062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.737 [2024-11-20 06:21:38.176080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.737 [2024-11-20 06:21:38.176097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.737 [2024-11-20 06:21:38.176108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.737 [2024-11-20 06:21:38.176115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:18.737 [2024-11-20 06:21:38.176120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.737 [2024-11-20 06:21:38.176125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.737 [2024-11-20 06:21:38.176131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:18.737 [2024-11-20 06:21:38.176140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.737 [2024-11-20 06:21:38.176151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176155] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.737 [2024-11-20 06:21:38.176166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.737 [2024-11-20 06:21:38.176182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.737 [2024-11-20 06:21:38.176197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.737 [2024-11-20 06:21:38.176212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.737 [2024-11-20 06:21:38.176227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.737 [2024-11-20 06:21:38.176237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.737 [2024-11-20 06:21:38.176242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:18.737 [2024-11-20 06:21:38.176247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.737 [2024-11-20 06:21:38.176253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:18.737 [2024-11-20 06:21:38.176259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:18.737 [2024-11-20 06:21:38.176263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:18.737 [2024-11-20 06:21:38.176274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:18.737 [2024-11-20 06:21:38.176278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176284] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.737 [2024-11-20 06:21:38.176291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.737 [2024-11-20 06:21:38.176297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.737 [2024-11-20 06:21:38.176308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.737 [2024-11-20 06:21:38.176313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.737 [2024-11-20 06:21:38.176318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.737 
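The layout dump above emits each on-disk region as a fixed triple: a name (ftl_layout.c:130, dump_region), an offset in MiB (line 131) and a size in MiB (line 133). A minimal shell sketch for pulling those triples back out of a saved copy of this console output; the filename build.log is a placeholder, not a file the test creates:

    # Collect the Region/offset/blocks triples emitted by dump_region and
    # print one region per row. Relies only on the record order seen above:
    # name line, then offset line, then blocks line.
    grep ':dump_region:' build.log \
      | grep -oE 'Region [A-Za-z0-9_]+|offset: [0-9.]+ MiB|blocks: [0-9.]+ MiB' \
      | paste - - - \
      | awk -F'\t' '{printf "%-24s %-26s %s\n", $1, $2, $3}'

Each output row then reads like "Region l2p   offset: 0.12 MiB   blocks: 80.00 MiB", matching the NV cache and base device entries above.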
[2024-11-20 06:21:38.176323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.737 [2024-11-20 06:21:38.176328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.737 [2024-11-20 06:21:38.176333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.737 [2024-11-20 06:21:38.176339] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.737 [2024-11-20 06:21:38.176346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.737 [2024-11-20 06:21:38.176353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:18.737 [2024-11-20 06:21:38.176358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:18.737 [2024-11-20 06:21:38.176364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:18.737 [2024-11-20 06:21:38.176369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:18.737 [2024-11-20 06:21:38.176374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:18.737 [2024-11-20 06:21:38.176380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:18.737 [2024-11-20 06:21:38.176386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:18.737 [2024-11-20 06:21:38.176398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:18.737 [2024-11-20 06:21:38.176403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:18.737 [2024-11-20 06:21:38.176409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:18.737 [2024-11-20 06:21:38.176415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:18.737 [2024-11-20 06:21:38.176421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:18.737 [2024-11-20 06:21:38.176430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:18.738 [2024-11-20 06:21:38.176435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:18.738 [2024-11-20 06:21:38.176441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.738 [2024-11-20 06:21:38.176450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.738 [2024-11-20 06:21:38.176462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.738 [2024-11-20 06:21:38.176469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.738 [2024-11-20 06:21:38.176474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.738 [2024-11-20 06:21:38.176480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.738 [2024-11-20 06:21:38.176486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.176500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.738 [2024-11-20 06:21:38.176507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:23:18.738 [2024-11-20 06:21:38.176512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.197784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.197814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.738 [2024-11-20 06:21:38.197822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.236 ms 00:23:18.738 [2024-11-20 06:21:38.197828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.197900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.197906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:18.738 [2024-11-20 06:21:38.197913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:18.738 [2024-11-20 06:21:38.197918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.236686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.236733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.738 [2024-11-20 06:21:38.236744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.723 ms 00:23:18.738 [2024-11-20 06:21:38.236751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.236800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.236808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.738 [2024-11-20 06:21:38.236819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:18.738 [2024-11-20 06:21:38.236824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.237163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.237177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.738 [2024-11-20 06:21:38.237185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:23:18.738 [2024-11-20 06:21:38.237191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.237286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.237293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.738 [2024-11-20 06:21:38.237300] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:18.738 [2024-11-20 06:21:38.237310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.248027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.248055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.738 [2024-11-20 06:21:38.248063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.700 ms 00:23:18.738 [2024-11-20 06:21:38.248071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.257764] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:18.738 [2024-11-20 06:21:38.257794] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:18.738 [2024-11-20 06:21:38.257803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.257810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:18.738 [2024-11-20 06:21:38.257817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.643 ms 00:23:18.738 [2024-11-20 06:21:38.257822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.276526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.276648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:18.738 [2024-11-20 06:21:38.276662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.671 ms 00:23:18.738 [2024-11-20 06:21:38.276669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.285458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.285485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:18.738 [2024-11-20 06:21:38.285503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.760 ms 00:23:18.738 [2024-11-20 06:21:38.285509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.294324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.294353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:18.738 [2024-11-20 06:21:38.294362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.787 ms 00:23:18.738 [2024-11-20 06:21:38.294368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.294894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.294915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:18.738 [2024-11-20 06:21:38.294923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:23:18.738 [2024-11-20 06:21:38.294931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.339659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.339827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:18.738 [2024-11-20 06:21:38.339850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.713 ms 00:23:18.738 [2024-11-20 06:21:38.339856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.348110] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:18.738 [2024-11-20 06:21:38.350384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.350410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:18.738 [2024-11-20 06:21:38.350421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.489 ms 00:23:18.738 [2024-11-20 06:21:38.350428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.350519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.350529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:18.738 [2024-11-20 06:21:38.350536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.738 [2024-11-20 06:21:38.350544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.351068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.351085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:18.738 [2024-11-20 06:21:38.351092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:23:18.738 [2024-11-20 06:21:38.351099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.351120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.351127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:18.738 [2024-11-20 06:21:38.351134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:18.738 [2024-11-20 06:21:38.351139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.738 [2024-11-20 06:21:38.351172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:18.738 [2024-11-20 06:21:38.351180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.738 [2024-11-20 06:21:38.351187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:18.738 [2024-11-20 06:21:38.351193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:18.738 [2024-11-20 06:21:38.351199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.996 [2024-11-20 06:21:38.370017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.996 [2024-11-20 06:21:38.370052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:18.996 [2024-11-20 06:21:38.370062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.802 ms 00:23:18.996 [2024-11-20 06:21:38.370073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.996 [2024-11-20 06:21:38.370141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.996 [2024-11-20 06:21:38.370150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:18.996 [2024-11-20 06:21:38.370157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:18.996 [2024-11-20 06:21:38.370163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.996 
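Every management step above is traced as the same quadruple - Action, name, duration, status - by trace_step in mngt/ftl_mngt.c. When a startup looks slow, the per-step durations can be ranked with a short pipeline; a sketch, assuming this console output has been saved to build.log (a placeholder name):

    # Pair each step name with its duration and print the slowest steps.
    # Works because trace_step always logs the "name:" line immediately
    # before the "duration:" line, as in the records above.
    grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' build.log \
      | paste - - \
      | awk -F'\t' '{sub(/name: /,"",$1); sub(/duration: /,"",$2); print $2, $1}' \
      | sort -rn | head -5

In this run, Restore P2L checkpoints (44.713 ms) and Initialize NV cache (38.723 ms) are the slowest startup steps by this measure.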
[2024-11-20 06:21:38.370989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 214.702 ms, result 0 00:23:19.926  [2024-11-20T06:21:40.929Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-20T06:21:41.861Z] Copying: 93/1024 [MB] (46 MBps) [2024-11-20T06:21:42.794Z] Copying: 139/1024 [MB] (45 MBps) [2024-11-20T06:21:43.789Z] Copying: 186/1024 [MB] (47 MBps) [2024-11-20T06:21:44.721Z] Copying: 234/1024 [MB] (47 MBps) [2024-11-20T06:21:45.655Z] Copying: 281/1024 [MB] (47 MBps) [2024-11-20T06:21:46.586Z] Copying: 326/1024 [MB] (44 MBps) [2024-11-20T06:21:47.527Z] Copying: 375/1024 [MB] (48 MBps) [2024-11-20T06:21:48.900Z] Copying: 422/1024 [MB] (47 MBps) [2024-11-20T06:21:49.831Z] Copying: 469/1024 [MB] (46 MBps) [2024-11-20T06:21:50.763Z] Copying: 512/1024 [MB] (43 MBps) [2024-11-20T06:21:51.786Z] Copying: 558/1024 [MB] (45 MBps) [2024-11-20T06:21:52.906Z] Copying: 598/1024 [MB] (39 MBps) [2024-11-20T06:21:53.549Z] Copying: 640/1024 [MB] (41 MBps) [2024-11-20T06:21:54.922Z] Copying: 683/1024 [MB] (43 MBps) [2024-11-20T06:21:55.853Z] Copying: 728/1024 [MB] (44 MBps) [2024-11-20T06:21:56.789Z] Copying: 773/1024 [MB] (44 MBps) [2024-11-20T06:21:57.722Z] Copying: 817/1024 [MB] (44 MBps) [2024-11-20T06:21:58.657Z] Copying: 863/1024 [MB] (45 MBps) [2024-11-20T06:21:59.594Z] Copying: 907/1024 [MB] (44 MBps) [2024-11-20T06:22:00.600Z] Copying: 946/1024 [MB] (39 MBps) [2024-11-20T06:22:01.531Z] Copying: 982/1024 [MB] (36 MBps) [2024-11-20T06:22:01.531Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-20 06:22:01.518308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.898 [2024-11-20 06:22:01.518366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:41.898 [2024-11-20 06:22:01.518381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:41.898 [2024-11-20 06:22:01.518389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.898 [2024-11-20 06:22:01.518409] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:41.898 [2024-11-20 06:22:01.521041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.898 [2024-11-20 06:22:01.521078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:41.898 [2024-11-20 06:22:01.521094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:23:41.898 [2024-11-20 06:22:01.521102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.898 [2024-11-20 06:22:01.521314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.898 [2024-11-20 06:22:01.521325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:41.898 [2024-11-20 06:22:01.521334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:23:41.898 [2024-11-20 06:22:01.521341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.898 [2024-11-20 06:22:01.524777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.898 [2024-11-20 06:22:01.524798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:41.898 [2024-11-20 06:22:01.524806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.422 ms 00:23:41.898 [2024-11-20 06:22:01.524813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.532646] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.532674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:42.157 [2024-11-20 06:22:01.532684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.815 ms 00:23:42.157 [2024-11-20 06:22:01.532692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.558729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.558763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:42.157 [2024-11-20 06:22:01.558774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.985 ms 00:23:42.157 [2024-11-20 06:22:01.558782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.573464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.573663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:42.157 [2024-11-20 06:22:01.573682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.661 ms 00:23:42.157 [2024-11-20 06:22:01.573691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.575288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.575326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:42.157 [2024-11-20 06:22:01.575336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:23:42.157 [2024-11-20 06:22:01.575343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.598427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.598463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:42.157 [2024-11-20 06:22:01.598474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.068 ms 00:23:42.157 [2024-11-20 06:22:01.598482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.626559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.626650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:42.157 [2024-11-20 06:22:01.626669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.042 ms 00:23:42.157 [2024-11-20 06:22:01.626681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.727886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.727942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:42.157 [2024-11-20 06:22:01.727956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.157 ms 00:23:42.157 [2024-11-20 06:22:01.727964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.750597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.157 [2024-11-20 06:22:01.750648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:42.157 [2024-11-20 06:22:01.750661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.554 ms 00:23:42.157 [2024-11-20 06:22:01.750668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:42.157 [2024-11-20 06:22:01.750693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:42.157 [2024-11-20 06:22:01.750708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:42.157 [2024-11-20 06:22:01.750726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:23:42.157 [2024-11-20 06:22:01.750735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:42.157 [2024-11-20 06:22:01.750827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.750999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751272] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:42.158 [2024-11-20 06:22:01.751439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:42.159 [2024-11-20 06:22:01.751446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:42.159 [2024-11-20 06:22:01.751453] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:42.159 [2024-11-20 06:22:01.751461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:42.159 [2024-11-20 06:22:01.751476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:42.159 [2024-11-20 06:22:01.751487] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9774b9d5-ae7a-4993-b44f-33dde2820a77 00:23:42.159 [2024-11-20 06:22:01.751505] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:23:42.159 [2024-11-20 06:22:01.751512] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:42.159 [2024-11-20 06:22:01.751519] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:42.159 [2024-11-20 06:22:01.751528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:42.159 [2024-11-20 06:22:01.751534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:42.159 [2024-11-20 06:22:01.751542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:42.159 [2024-11-20 06:22:01.751562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:42.159 [2024-11-20 06:22:01.751569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:42.159 [2024-11-20 06:22:01.751575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:42.159 [2024-11-20 06:22:01.751582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.159 [2024-11-20 06:22:01.751589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:42.159 [2024-11-20 06:22:01.751598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:23:42.159 [2024-11-20 06:22:01.751605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.159 [2024-11-20 06:22:01.763765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.159 [2024-11-20 06:22:01.763806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:42.159 [2024-11-20 06:22:01.763817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.139 ms 00:23:42.159 [2024-11-20 06:22:01.763824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.159 [2024-11-20 06:22:01.764172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.159 [2024-11-20 06:22:01.764182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:42.159 [2024-11-20 06:22:01.764194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:23:42.159 [2024-11-20 06:22:01.764201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.796465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.796520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.416 [2024-11-20 06:22:01.796531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.796538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.796603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.796611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.416 
[2024-11-20 06:22:01.796623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.796631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.796689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.796698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.416 [2024-11-20 06:22:01.796706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.796714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.796728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.796736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.416 [2024-11-20 06:22:01.796743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.796754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.871940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.871988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:42.416 [2024-11-20 06:22:01.871999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.872006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.933308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.933355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:42.416 [2024-11-20 06:22:01.933369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.933377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.933445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.933454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:42.416 [2024-11-20 06:22:01.933463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.933470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.933526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.933537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:42.416 [2024-11-20 06:22:01.933545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.933552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.933641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.933651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:42.416 [2024-11-20 06:22:01.933659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.933666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.933693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.416 [2024-11-20 06:22:01.933702] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:42.416 [2024-11-20 06:22:01.933710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.416 [2024-11-20 06:22:01.933717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.416 [2024-11-20 06:22:01.933752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.417 [2024-11-20 06:22:01.933762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:42.417 [2024-11-20 06:22:01.933769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.417 [2024-11-20 06:22:01.933776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.417 [2024-11-20 06:22:01.933814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.417 [2024-11-20 06:22:01.933824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:42.417 [2024-11-20 06:22:01.933832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.417 [2024-11-20 06:22:01.933839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.417 [2024-11-20 06:22:01.933945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 415.612 ms, result 0 00:23:43.348 00:23:43.348 00:23:43.348 06:22:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:45.908 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:23:45.908 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:23:45.908 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:23:45.908 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:45.908 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:45.908 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:45.908 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:45.908 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:45.908 Process with pid 76172 is not found 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76172 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 76172 ']' 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 76172 00:23:45.909 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76172) - No such process 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 76172 is not found' 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:45.909 Remove shared memory files 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:45.909 06:22:05 
ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:45.909 00:23:45.909 real 3m22.001s 00:23:45.909 user 4m17.658s 00:23:45.909 sys 0m34.102s 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:45.909 06:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:45.909 ************************************ 00:23:45.909 END TEST ftl_dirty_shutdown 00:23:45.909 ************************************ 00:23:46.167 06:22:05 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:46.167 06:22:05 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:46.167 06:22:05 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:46.167 06:22:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:46.167 ************************************ 00:23:46.167 START TEST ftl_upgrade_shutdown 00:23:46.167 ************************************ 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:46.167 * Looking for test storage... 00:23:46.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:46.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.167 --rc genhtml_branch_coverage=1 00:23:46.167 --rc genhtml_function_coverage=1 00:23:46.167 --rc genhtml_legend=1 00:23:46.167 --rc geninfo_all_blocks=1 00:23:46.167 --rc geninfo_unexecuted_blocks=1 00:23:46.167 00:23:46.167 ' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:46.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.167 --rc genhtml_branch_coverage=1 00:23:46.167 --rc genhtml_function_coverage=1 00:23:46.167 --rc genhtml_legend=1 00:23:46.167 --rc geninfo_all_blocks=1 00:23:46.167 --rc geninfo_unexecuted_blocks=1 00:23:46.167 00:23:46.167 ' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:46.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.167 --rc genhtml_branch_coverage=1 00:23:46.167 --rc genhtml_function_coverage=1 00:23:46.167 --rc genhtml_legend=1 00:23:46.167 --rc geninfo_all_blocks=1 00:23:46.167 --rc geninfo_unexecuted_blocks=1 00:23:46.167 00:23:46.167 ' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:46.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.167 --rc genhtml_branch_coverage=1 00:23:46.167 --rc genhtml_function_coverage=1 00:23:46.167 --rc genhtml_legend=1 00:23:46.167 --rc geninfo_all_blocks=1 00:23:46.167 --rc geninfo_unexecuted_blocks=1 00:23:46.167 00:23:46.167 ' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:23:46.167 06:22:05 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:23:46.167 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78384 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78384 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78384 ']' 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:46.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:46.168 06:22:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:46.425 [2024-11-20 06:22:05.801202] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
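
[Note] The target bring-up traced above reduces to a short sequence: launch spdk_tgt pinned to core 0, then block until its RPC socket answers. A minimal sketch, assuming the SPDK tree at the path shown in this log; the polling loop and the rpc_get_methods liveness probe are illustrative stand-ins for the script's waitforlisten helper, not its actual implementation:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Launch the target on core 0, as in the trace above.
    "$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # Block until the default RPC socket accepts requests (illustrative probe).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
    echo "spdk_tgt (pid $spdk_tgt_pid) listening on /var/tmp/spdk.sock"
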
00:23:46.425 [2024-11-20 06:22:05.801317] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78384 ] 00:23:46.425 [2024-11-20 06:22:05.960375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.684 [2024-11-20 06:22:06.058131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:23:47.249 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:23:47.250 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:47.250 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:23:47.250 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:47.250 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:23:47.508 06:22:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:23:47.508 06:22:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:47.508 { 00:23:47.508 "name": "basen1", 00:23:47.508 "aliases": [ 00:23:47.508 "84b12b23-feea-4b5e-b54d-3ff6a58d49a2" 00:23:47.508 ], 00:23:47.508 "product_name": "NVMe disk", 00:23:47.508 "block_size": 4096, 00:23:47.508 "num_blocks": 1310720, 00:23:47.508 "uuid": "84b12b23-feea-4b5e-b54d-3ff6a58d49a2", 00:23:47.508 "numa_id": -1, 00:23:47.508 "assigned_rate_limits": { 00:23:47.508 "rw_ios_per_sec": 0, 00:23:47.508 "rw_mbytes_per_sec": 0, 00:23:47.508 "r_mbytes_per_sec": 0, 00:23:47.508 "w_mbytes_per_sec": 0 00:23:47.508 }, 00:23:47.508 "claimed": true, 00:23:47.508 "claim_type": "read_many_write_one", 00:23:47.508 "zoned": false, 00:23:47.508 "supported_io_types": { 00:23:47.508 "read": true, 00:23:47.508 "write": true, 00:23:47.508 "unmap": true, 00:23:47.508 "flush": true, 00:23:47.508 "reset": true, 00:23:47.508 "nvme_admin": true, 00:23:47.508 "nvme_io": true, 00:23:47.508 "nvme_io_md": false, 00:23:47.508 "write_zeroes": true, 00:23:47.508 "zcopy": false, 00:23:47.508 "get_zone_info": false, 00:23:47.508 "zone_management": false, 00:23:47.508 "zone_append": false, 00:23:47.508 "compare": true, 00:23:47.508 "compare_and_write": false, 00:23:47.508 "abort": true, 00:23:47.508 "seek_hole": false, 00:23:47.508 "seek_data": false, 00:23:47.508 "copy": true, 00:23:47.508 "nvme_iov_md": false 00:23:47.508 }, 00:23:47.508 "driver_specific": { 00:23:47.508 "nvme": [ 00:23:47.508 { 00:23:47.508 "pci_address": "0000:00:11.0", 00:23:47.508 "trid": { 00:23:47.508 "trtype": "PCIe", 00:23:47.508 "traddr": "0000:00:11.0" 00:23:47.508 }, 00:23:47.508 "ctrlr_data": { 00:23:47.508 "cntlid": 0, 00:23:47.508 "vendor_id": "0x1b36", 00:23:47.508 "model_number": "QEMU NVMe Ctrl", 00:23:47.508 "serial_number": "12341", 00:23:47.508 "firmware_revision": "8.0.0", 00:23:47.508 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:47.508 "oacs": { 00:23:47.508 "security": 0, 00:23:47.508 "format": 1, 00:23:47.508 "firmware": 0, 00:23:47.508 "ns_manage": 1 00:23:47.508 }, 00:23:47.508 "multi_ctrlr": false, 00:23:47.508 "ana_reporting": false 00:23:47.508 }, 00:23:47.508 "vs": { 00:23:47.508 "nvme_version": "1.4" 00:23:47.508 }, 00:23:47.508 "ns_data": { 00:23:47.508 "id": 1, 00:23:47.508 "can_share": false 00:23:47.508 } 00:23:47.508 } 00:23:47.508 ], 00:23:47.508 "mp_policy": "active_passive" 00:23:47.508 } 00:23:47.508 } 00:23:47.508 ]' 00:23:47.508 06:22:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ed02ffdd-a8a8-4c8b-9f70-375ac812139f 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:47.768 06:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed02ffdd-a8a8-4c8b-9f70-375ac812139f 00:23:49.152 06:22:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:23:49.410 06:22:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f0b31910-ac89-407f-8282-6400b663eef2 00:23:49.410 06:22:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f0b31910-ac89-407f-8282-6400b663eef2 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=0a213a07-3486-4dbb-8d06-ac0b8fbc7140 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 0a213a07-3486-4dbb-8d06-ac0b8fbc7140 ]] 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 0a213a07-3486-4dbb-8d06-ac0b8fbc7140 5120 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=0a213a07-3486-4dbb-8d06-ac0b8fbc7140 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0a213a07-3486-4dbb-8d06-ac0b8fbc7140 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=0a213a07-3486-4dbb-8d06-ac0b8fbc7140 00:23:49.667 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:49.668 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:49.668 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:49.668 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a213a07-3486-4dbb-8d06-ac0b8fbc7140 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:49.925 { 00:23:49.925 "name": "0a213a07-3486-4dbb-8d06-ac0b8fbc7140", 00:23:49.925 "aliases": [ 00:23:49.925 "lvs/basen1p0" 00:23:49.925 ], 00:23:49.925 "product_name": "Logical Volume", 00:23:49.925 "block_size": 4096, 00:23:49.925 "num_blocks": 5242880, 00:23:49.925 "uuid": "0a213a07-3486-4dbb-8d06-ac0b8fbc7140", 00:23:49.925 "assigned_rate_limits": { 00:23:49.925 "rw_ios_per_sec": 0, 00:23:49.925 "rw_mbytes_per_sec": 0, 00:23:49.925 "r_mbytes_per_sec": 0, 00:23:49.925 "w_mbytes_per_sec": 0 00:23:49.925 }, 00:23:49.925 "claimed": false, 00:23:49.925 "zoned": false, 00:23:49.925 "supported_io_types": { 00:23:49.925 "read": true, 00:23:49.925 "write": true, 00:23:49.925 "unmap": true, 00:23:49.925 "flush": false, 00:23:49.925 "reset": true, 00:23:49.925 "nvme_admin": false, 00:23:49.925 "nvme_io": false, 00:23:49.925 "nvme_io_md": false, 00:23:49.925 "write_zeroes": 
true, 00:23:49.925 "zcopy": false, 00:23:49.925 "get_zone_info": false, 00:23:49.925 "zone_management": false, 00:23:49.925 "zone_append": false, 00:23:49.925 "compare": false, 00:23:49.925 "compare_and_write": false, 00:23:49.925 "abort": false, 00:23:49.925 "seek_hole": true, 00:23:49.925 "seek_data": true, 00:23:49.925 "copy": false, 00:23:49.925 "nvme_iov_md": false 00:23:49.925 }, 00:23:49.925 "driver_specific": { 00:23:49.925 "lvol": { 00:23:49.925 "lvol_store_uuid": "f0b31910-ac89-407f-8282-6400b663eef2", 00:23:49.925 "base_bdev": "basen1", 00:23:49.925 "thin_provision": true, 00:23:49.925 "num_allocated_clusters": 0, 00:23:49.925 "snapshot": false, 00:23:49.925 "clone": false, 00:23:49.925 "esnap_clone": false 00:23:49.925 } 00:23:49.925 } 00:23:49.925 } 00:23:49.925 ]' 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:49.925 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:23:50.182 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:23:50.182 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:23:50.182 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:23:50.440 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:23:50.440 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:23:50.440 06:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 0a213a07-3486-4dbb-8d06-ac0b8fbc7140 -c cachen1p0 --l2p_dram_limit 2 00:23:50.440 [2024-11-20 06:22:10.065655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.440 [2024-11-20 06:22:10.065699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:50.440 [2024-11-20 06:22:10.065712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:50.440 [2024-11-20 06:22:10.065719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.440 [2024-11-20 06:22:10.065763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.440 [2024-11-20 06:22:10.065771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:50.440 [2024-11-20 06:22:10.065779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:23:50.440 [2024-11-20 06:22:10.065785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.440 [2024-11-20 06:22:10.065802] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:50.440 [2024-11-20 
06:22:10.066387] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:50.440 [2024-11-20 06:22:10.066403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.440 [2024-11-20 06:22:10.066409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:50.440 [2024-11-20 06:22:10.066417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:23:50.440 [2024-11-20 06:22:10.066424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.440 [2024-11-20 06:22:10.066450] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b6856bd1-e664-4b91-859a-7040bb4e7a02 00:23:50.440 [2024-11-20 06:22:10.067517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.440 [2024-11-20 06:22:10.067552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:23:50.440 [2024-11-20 06:22:10.067564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:23:50.440 [2024-11-20 06:22:10.067576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.699 [2024-11-20 06:22:10.072797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.699 [2024-11-20 06:22:10.072842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:50.699 [2024-11-20 06:22:10.072857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.083 ms 00:23:50.699 [2024-11-20 06:22:10.072869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.699 [2024-11-20 06:22:10.072916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.699 [2024-11-20 06:22:10.072930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:50.699 [2024-11-20 06:22:10.072941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:23:50.699 [2024-11-20 06:22:10.072954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.699 [2024-11-20 06:22:10.072999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.699 [2024-11-20 06:22:10.073012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:50.699 [2024-11-20 06:22:10.073022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:23:50.699 [2024-11-20 06:22:10.073039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.699 [2024-11-20 06:22:10.073064] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:50.699 [2024-11-20 06:22:10.077376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.699 [2024-11-20 06:22:10.077415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:50.699 [2024-11-20 06:22:10.077430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.317 ms 00:23:50.699 [2024-11-20 06:22:10.077440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.699 [2024-11-20 06:22:10.077471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.699 [2024-11-20 06:22:10.077481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:50.699 [2024-11-20 06:22:10.077509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:50.699 [2024-11-20 06:22:10.077520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:23:50.699 [2024-11-20 06:22:10.077571] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:23:50.699 [2024-11-20 06:22:10.077724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:50.699 [2024-11-20 06:22:10.077749] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:50.699 [2024-11-20 06:22:10.077763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:50.699 [2024-11-20 06:22:10.077778] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:50.699 [2024-11-20 06:22:10.077789] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:50.699 [2024-11-20 06:22:10.077801] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:50.699 [2024-11-20 06:22:10.077810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:50.699 [2024-11-20 06:22:10.077824] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:50.699 [2024-11-20 06:22:10.077834] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:50.699 [2024-11-20 06:22:10.077846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.699 [2024-11-20 06:22:10.077856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:50.699 [2024-11-20 06:22:10.077869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:23:50.699 [2024-11-20 06:22:10.077878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.699 [2024-11-20 06:22:10.077978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.699 [2024-11-20 06:22:10.077990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:50.699 [2024-11-20 06:22:10.078003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:23:50.699 [2024-11-20 06:22:10.078019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.700 [2024-11-20 06:22:10.078133] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:50.700 [2024-11-20 06:22:10.078145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:50.700 [2024-11-20 06:22:10.078158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:50.700 [2024-11-20 06:22:10.078188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:50.700 [2024-11-20 06:22:10.078208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:50.700 [2024-11-20 06:22:10.078219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:50.700 [2024-11-20 06:22:10.078229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:50.700 [2024-11-20 06:22:10.078249] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:23:50.700 [2024-11-20 06:22:10.078259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:50.700 [2024-11-20 06:22:10.078278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:50.700 [2024-11-20 06:22:10.078287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:50.700 [2024-11-20 06:22:10.078311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:50.700 [2024-11-20 06:22:10.078322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:50.700 [2024-11-20 06:22:10.078343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:50.700 [2024-11-20 06:22:10.078352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:50.700 [2024-11-20 06:22:10.078372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:50.700 [2024-11-20 06:22:10.078383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:50.700 [2024-11-20 06:22:10.078404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:50.700 [2024-11-20 06:22:10.078413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:50.700 [2024-11-20 06:22:10.078432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:50.700 [2024-11-20 06:22:10.078443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:50.700 [2024-11-20 06:22:10.078464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:50.700 [2024-11-20 06:22:10.078473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:50.700 [2024-11-20 06:22:10.078506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:50.700 [2024-11-20 06:22:10.078540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:50.700 [2024-11-20 06:22:10.078568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:50.700 [2024-11-20 06:22:10.078579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078587] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:23:50.700 [2024-11-20 06:22:10.078601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:50.700 [2024-11-20 06:22:10.078610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:50.700 [2024-11-20 06:22:10.078631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:50.700 [2024-11-20 06:22:10.078643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:50.700 [2024-11-20 06:22:10.078652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:50.700 [2024-11-20 06:22:10.078663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:50.700 [2024-11-20 06:22:10.078671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:50.700 [2024-11-20 06:22:10.078682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:50.700 [2024-11-20 06:22:10.078695] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:50.700 [2024-11-20 06:22:10.078708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:50.700 [2024-11-20 06:22:10.078734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:50.700 [2024-11-20 06:22:10.078764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:50.700 [2024-11-20 06:22:10.078775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:50.700 [2024-11-20 06:22:10.078784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:50.700 [2024-11-20 06:22:10.078795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:50.700 [2024-11-20 06:22:10.078871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:50.700 [2024-11-20 06:22:10.078883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:50.700 [2024-11-20 06:22:10.078916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:50.701 [2024-11-20 06:22:10.078925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:50.701 [2024-11-20 06:22:10.078936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:50.701 [2024-11-20 06:22:10.078947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:50.701 [2024-11-20 06:22:10.078959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:50.701 [2024-11-20 06:22:10.078968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.890 ms 00:23:50.701 [2024-11-20 06:22:10.078979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:50.701 [2024-11-20 06:22:10.079026] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
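
[Note] While the NV cache scrub below runs, it is worth restating how the bdev stack under this FTL instance was assembled. Condensed from the RPC calls traced above, with the per-run UUIDs copied from this log (they change on every run) and sizes in MiB:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: thin-provisioned 20480 MiB lvol on the 5 GiB QEMU namespace
    # (stale lvstores are deleted first by clear_lvols, as traced above).
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
    $RPC bdev_lvol_create_lvstore basen1 lvs
    $RPC bdev_lvol_create basen1p0 20480 -t -u f0b31910-ac89-407f-8282-6400b663eef2
    # NV cache: first 5120 MiB split of the second QEMU namespace.
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
    $RPC bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0
    # Tie them together; the scrub logged below is part of this create call.
    $RPC -t 60 bdev_ftl_create -b ftl -d 0a213a07-3486-4dbb-8d06-ac0b8fbc7140 \
        -c cachen1p0 --l2p_dram_limit 2
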
00:23:50.701 [2024-11-20 06:22:10.079042] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:53.251 [2024-11-20 06:22:12.628553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.251 [2024-11-20 06:22:12.628605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:53.251 [2024-11-20 06:22:12.628620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2549.516 ms 00:23:53.252 [2024-11-20 06:22:12.628630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.654652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.654700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:53.252 [2024-11-20 06:22:12.654713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.830 ms 00:23:53.252 [2024-11-20 06:22:12.654723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.654815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.654829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:53.252 [2024-11-20 06:22:12.654837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:23:53.252 [2024-11-20 06:22:12.654851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.685701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.685741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:53.252 [2024-11-20 06:22:12.685752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.815 ms 00:23:53.252 [2024-11-20 06:22:12.685763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.685799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.685811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:53.252 [2024-11-20 06:22:12.685819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:53.252 [2024-11-20 06:22:12.685828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.686160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.686179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:53.252 [2024-11-20 06:22:12.686188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:23:53.252 [2024-11-20 06:22:12.686197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.686242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.686252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:53.252 [2024-11-20 06:22:12.686263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:23:53.252 [2024-11-20 06:22:12.686275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.700252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.700288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:53.252 [2024-11-20 06:22:12.700298] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.959 ms 00:23:53.252 [2024-11-20 06:22:12.700307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.711520] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:53.252 [2024-11-20 06:22:12.712303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.712327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:53.252 [2024-11-20 06:22:12.712340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.898 ms 00:23:53.252 [2024-11-20 06:22:12.712347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.746857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.746925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:23:53.252 [2024-11-20 06:22:12.746944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.475 ms 00:23:53.252 [2024-11-20 06:22:12.746952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.747051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.747065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:53.252 [2024-11-20 06:22:12.747078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:23:53.252 [2024-11-20 06:22:12.747086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.770469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.770524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:23:53.252 [2024-11-20 06:22:12.770539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.334 ms 00:23:53.252 [2024-11-20 06:22:12.770547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.793992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.794036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:23:53.252 [2024-11-20 06:22:12.794049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.383 ms 00:23:53.252 [2024-11-20 06:22:12.794057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.794653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.794674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:53.252 [2024-11-20 06:22:12.794686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:23:53.252 [2024-11-20 06:22:12.794696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.252 [2024-11-20 06:22:12.868243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.252 [2024-11-20 06:22:12.868292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:23:53.252 [2024-11-20 06:22:12.868310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.495 ms 00:23:53.252 [2024-11-20 06:22:12.868318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.512 [2024-11-20 06:22:12.893165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:23:53.512 [2024-11-20 06:22:12.893220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:23:53.512 [2024-11-20 06:22:12.893242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.746 ms 00:23:53.512 [2024-11-20 06:22:12.893251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.512 [2024-11-20 06:22:12.917728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.512 [2024-11-20 06:22:12.917773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:23:53.512 [2024-11-20 06:22:12.917787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.413 ms 00:23:53.512 [2024-11-20 06:22:12.917795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.512 [2024-11-20 06:22:12.941602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.512 [2024-11-20 06:22:12.941645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:53.512 [2024-11-20 06:22:12.941659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.752 ms 00:23:53.512 [2024-11-20 06:22:12.941670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.512 [2024-11-20 06:22:12.941718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.512 [2024-11-20 06:22:12.941728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:53.512 [2024-11-20 06:22:12.941741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:53.512 [2024-11-20 06:22:12.941751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.512 [2024-11-20 06:22:12.941832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:53.512 [2024-11-20 06:22:12.941843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:53.512 [2024-11-20 06:22:12.941856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:23:53.512 [2024-11-20 06:22:12.941863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:53.512 [2024-11-20 06:22:12.942772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2876.666 ms, result 0 00:23:53.512 { 00:23:53.512 "name": "ftl", 00:23:53.512 "uuid": "b6856bd1-e664-4b91-859a-7040bb4e7a02" 00:23:53.512 } 00:23:53.512 06:22:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:23:53.771 [2024-11-20 06:22:13.150157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.771 06:22:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:23:53.771 06:22:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:23:54.029 [2024-11-20 06:22:13.570606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:54.029 06:22:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:23:54.287 [2024-11-20 06:22:13.774966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:54.287 06:22:13 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:54.545 Fill FTL, iteration 1 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=78507 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 78507 /var/tmp/spdk.tgt.sock 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78507 ']' 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:54.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:54.545 06:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:54.803 [2024-11-20 06:22:14.184073] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
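
[Note] Context for the second spdk_tgt just started on core 1: the fill never touches the ftl bdev directly. The first target publishes it over NVMe/TCP, and this "initiator" process attaches it back as ftln1 (traced in the records that follow). All commands appear verbatim in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side: export the ftl bdev over NVMe/TCP on 127.0.0.1:4420.
    $RPC nvmf_create_transport --trtype TCP
    $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # Initiator side (pid 78507, own RPC socket): attach it back as ftln1.
    $RPC -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
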
00:23:54.803 [2024-11-20 06:22:14.184218] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78507 ] 00:23:54.803 [2024-11-20 06:22:14.346240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.061 [2024-11-20 06:22:14.463856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.627 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.627 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:55.627 06:22:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:23:55.977 ftln1 00:23:55.977 06:22:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:23:55.977 06:22:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:23:55.977 06:22:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:23:55.977 06:22:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 78507 00:23:55.977 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78507 ']' 00:23:55.977 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 78507 00:23:55.977 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:23:56.237 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.237 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78507 00:23:56.237 killing process with pid 78507 00:23:56.237 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:56.237 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:56.237 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78507' 00:23:56.237 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 78507 00:23:56.237 06:22:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 78507 00:23:57.613 06:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:23:57.613 06:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:57.613 [2024-11-20 06:22:17.232220] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
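
[Note] The fill command traced at the end of the records above shows what the tcp_dd helper expands to: a plain spdk_dd run against the initiator's RPC socket, where --ob names the output bdev and --seek counts bs-sized blocks (1 MiB units here, as the script's seek arithmetic implies). Reproduced from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
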
00:23:57.613 [2024-11-20 06:22:17.232411] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78553 ] 00:23:57.870 [2024-11-20 06:22:17.414056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.128 [2024-11-20 06:22:17.530306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.514  [2024-11-20T06:22:20.107Z] Copying: 265/1024 [MB] (265 MBps) [2024-11-20T06:22:21.040Z] Copying: 540/1024 [MB] (275 MBps) [2024-11-20T06:22:21.973Z] Copying: 820/1024 [MB] (280 MBps) [2024-11-20T06:22:22.540Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:24:02.907 00:24:02.907 Calculate MD5 checksum, iteration 1 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:02.907 06:22:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:02.907 [2024-11-20 06:22:22.343673] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
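
[Note] Verification is the mirror image of the fill: the same 1024 MiB window is read back out of ftln1 into a host file (--ib/--of, with --skip in place of --seek) and fingerprinted. Replayed from the trace, together with the md5sum/cut step that follows:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '
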
00:24:02.907 [2024-11-20 06:22:22.343792] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78608 ] 00:24:02.907 [2024-11-20 06:22:22.499791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.166 [2024-11-20 06:22:22.580738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.541  [2024-11-20T06:22:24.433Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-20T06:22:24.999Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:24:05.366 00:24:05.366 06:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:05.366 06:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:07.950 Fill FTL, iteration 2 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9a706539f4a4ca6de3a0bce60d848e9c 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:07.950 06:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:07.950 [2024-11-20 06:22:27.077870] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
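
[Note] With iteration 2 now starting at seek=1024, the whole fill-and-checksum phase is visible as a two-pass loop over adjacent 1 GiB windows of ftln1; seek and skip advance by 1024 blocks per pass. A distilled sketch using the tcp_dd helper from ftl/common.sh (available only with that file sourced; the script itself also echoes the progress banners seen in the log, omitted here):

    iterations=2
    sums=()
    for ((i = 0; i < iterations; i++)); do
        # Write 1024 x 1 MiB of random data at queue depth 2, then read it back.
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$((i * 1024))
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    done
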
00:24:07.950 [2024-11-20 06:22:27.077975] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78664 ] 00:24:07.950 [2024-11-20 06:22:27.238780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.950 [2024-11-20 06:22:27.336236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.323  [2024-11-20T06:22:29.930Z] Copying: 218/1024 [MB] (218 MBps) [2024-11-20T06:22:30.884Z] Copying: 428/1024 [MB] (210 MBps) [2024-11-20T06:22:31.815Z] Copying: 677/1024 [MB] (249 MBps) [2024-11-20T06:22:32.129Z] Copying: 940/1024 [MB] (263 MBps) [2024-11-20T06:22:32.693Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:24:13.060 00:24:13.060 Calculate MD5 checksum, iteration 2 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:13.060 06:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:13.060 [2024-11-20 06:22:32.670618] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
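Note how the offsets march in lockstep with the passes: the fill that just completed wrote at --seek=1024, the read-back above uses --skip=1024, and seek has already advanced to 2048. Inferring from the trace (the first pass was verified at --skip=0, so its fill presumably started at offset 0), each pass owns one 1 GiB window:

    # window arithmetic implied by the trace (sketch, 0-based pass index i):
    #   first pass:  fill --seek=0,    verify --skip=0,    then seek=1024
    #   second pass: fill --seek=1024, verify --skip=1024, then seek=2048
    window_mb=1024              # --bs=1048576 --count=1024 == 1 GiB
    seek=$(( i * window_mb ))   # write offset for this pass, in 1 MiB blocks
    skip=$seek                  # read-back starts at the same offset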
00:24:13.060 [2024-11-20 06:22:32.670733] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78722 ] 00:24:13.316 [2024-11-20 06:22:32.824958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.316 [2024-11-20 06:22:32.908394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.210  [2024-11-20T06:22:35.101Z] Copying: 652/1024 [MB] (652 MBps) [2024-11-20T06:22:38.378Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:24:18.745 00:24:18.745 06:22:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:24:18.745 06:22:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:20.646 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:20.646 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e7524cb999f4d10186c171f4e3aeba56 00:24:20.646 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:20.646 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:20.646 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:20.646 [2024-11-20 06:22:40.266671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:20.646 [2024-11-20 06:22:40.266719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:20.646 [2024-11-20 06:22:40.266731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:20.646 [2024-11-20 06:22:40.266738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:20.646 [2024-11-20 06:22:40.266757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:20.646 [2024-11-20 06:22:40.266764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:20.646 [2024-11-20 06:22:40.266773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:20.646 [2024-11-20 06:22:40.266779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:20.646 [2024-11-20 06:22:40.266795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:20.646 [2024-11-20 06:22:40.266801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:20.646 [2024-11-20 06:22:40.266808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:20.646 [2024-11-20 06:22:40.266813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:20.646 [2024-11-20 06:22:40.266862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.183 ms, result 0 00:24:20.646 true 00:24:20.904 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:20.904 { 00:24:20.904 "name": "ftl", 00:24:20.904 "properties": [ 00:24:20.904 { 00:24:20.904 "name": "superblock_version", 00:24:20.904 "value": 5, 00:24:20.904 "read-only": true 00:24:20.904 }, 00:24:20.904 { 00:24:20.905 "name": "base_device", 00:24:20.905 "bands": [ 00:24:20.905 { 00:24:20.905 "id": 0, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 
00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 1, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 2, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 3, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 4, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 5, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 6, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 7, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 8, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 9, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 10, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 11, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 12, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 13, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 14, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 15, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 16, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 17, 00:24:20.905 "state": "FREE", 00:24:20.905 "validity": 0.0 00:24:20.905 } 00:24:20.905 ], 00:24:20.905 "read-only": true 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "name": "cache_device", 00:24:20.905 "type": "bdev", 00:24:20.905 "chunks": [ 00:24:20.905 { 00:24:20.905 "id": 0, 00:24:20.905 "state": "INACTIVE", 00:24:20.905 "utilization": 0.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 1, 00:24:20.905 "state": "CLOSED", 00:24:20.905 "utilization": 1.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 2, 00:24:20.905 "state": "CLOSED", 00:24:20.905 "utilization": 1.0 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 3, 00:24:20.905 "state": "OPEN", 00:24:20.905 "utilization": 0.001953125 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "id": 4, 00:24:20.905 "state": "OPEN", 00:24:20.905 "utilization": 0.0 00:24:20.905 } 00:24:20.905 ], 00:24:20.905 "read-only": true 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "name": "verbose_mode", 00:24:20.905 "value": true, 00:24:20.905 "unit": "", 00:24:20.905 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:20.905 }, 00:24:20.905 { 00:24:20.905 "name": "prep_upgrade_on_shutdown", 00:24:20.905 "value": false, 00:24:20.905 "unit": "", 00:24:20.905 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:20.905 } 00:24:20.905 ] 00:24:20.905 } 00:24:20.905 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:24:21.164 [2024-11-20 06:22:40.695064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:24:21.164 [2024-11-20 06:22:40.695119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:21.164 [2024-11-20 06:22:40.695130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:21.164 [2024-11-20 06:22:40.695136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:21.164 [2024-11-20 06:22:40.695156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:21.164 [2024-11-20 06:22:40.695163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:21.164 [2024-11-20 06:22:40.695169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:21.164 [2024-11-20 06:22:40.695176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:21.164 [2024-11-20 06:22:40.695191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:21.164 [2024-11-20 06:22:40.695197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:21.164 [2024-11-20 06:22:40.695203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:21.164 [2024-11-20 06:22:40.695209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:21.164 [2024-11-20 06:22:40.695256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.187 ms, result 0 00:24:21.164 true 00:24:21.164 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:24:21.164 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:21.164 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:21.423 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:24:21.423 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:24:21.423 06:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:21.682 [2024-11-20 06:22:41.099395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:21.682 [2024-11-20 06:22:41.099431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:21.682 [2024-11-20 06:22:41.099442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:21.682 [2024-11-20 06:22:41.099447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:21.682 [2024-11-20 06:22:41.099464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:21.682 [2024-11-20 06:22:41.099471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:21.682 [2024-11-20 06:22:41.099476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:21.682 [2024-11-20 06:22:41.099482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:21.682 [2024-11-20 06:22:41.099506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:21.682 [2024-11-20 06:22:41.099512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:21.682 [2024-11-20 06:22:41.099518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:21.682 [2024-11-20 06:22:41.099524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:24:21.682 [2024-11-20 06:22:41.099567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.166 ms, result 0 00:24:21.682 true 00:24:21.682 06:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:21.682 { 00:24:21.682 "name": "ftl", 00:24:21.682 "properties": [ 00:24:21.682 { 00:24:21.682 "name": "superblock_version", 00:24:21.682 "value": 5, 00:24:21.682 "read-only": true 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "name": "base_device", 00:24:21.682 "bands": [ 00:24:21.682 { 00:24:21.682 "id": 0, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 1, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 2, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 3, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 4, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 5, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 6, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 7, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 8, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 9, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 10, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 11, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 12, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 13, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 14, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 15, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 16, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 17, 00:24:21.682 "state": "FREE", 00:24:21.682 "validity": 0.0 00:24:21.682 } 00:24:21.682 ], 00:24:21.682 "read-only": true 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "name": "cache_device", 00:24:21.682 "type": "bdev", 00:24:21.682 "chunks": [ 00:24:21.682 { 00:24:21.682 "id": 0, 00:24:21.682 "state": "INACTIVE", 00:24:21.682 "utilization": 0.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 1, 00:24:21.682 "state": "CLOSED", 00:24:21.682 "utilization": 1.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 2, 00:24:21.682 "state": "CLOSED", 00:24:21.682 "utilization": 1.0 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 3, 00:24:21.682 "state": "OPEN", 00:24:21.682 "utilization": 0.001953125 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "id": 4, 00:24:21.682 "state": "OPEN", 00:24:21.682 "utilization": 0.0 00:24:21.682 } 00:24:21.682 ], 00:24:21.682 "read-only": true 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "name": "verbose_mode", 
00:24:21.682 "value": true, 00:24:21.682 "unit": "", 00:24:21.682 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:21.682 }, 00:24:21.682 { 00:24:21.682 "name": "prep_upgrade_on_shutdown", 00:24:21.682 "value": true, 00:24:21.682 "unit": "", 00:24:21.682 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:21.682 } 00:24:21.682 ] 00:24:21.682 } 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78384 ]] 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78384 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78384 ']' 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 78384 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78384 00:24:21.940 killing process with pid 78384 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:21.940 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78384' 00:24:21.941 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 78384 00:24:21.941 06:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 78384 00:24:22.507 [2024-11-20 06:22:41.901978] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:24:22.507 [2024-11-20 06:22:41.914822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.507 [2024-11-20 06:22:41.914870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:22.507 [2024-11-20 06:22:41.914880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:22.507 [2024-11-20 06:22:41.914887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.507 [2024-11-20 06:22:41.914911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:22.507 [2024-11-20 06:22:41.916987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.507 [2024-11-20 06:22:41.917015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:22.507 [2024-11-20 06:22:41.917024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.064 ms 00:24:22.507 [2024-11-20 06:22:41.917030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.669 [2024-11-20 06:22:49.810361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.669 [2024-11-20 06:22:49.810424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:30.669 [2024-11-20 06:22:49.810438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7893.278 ms 00:24:30.669 [2024-11-20 06:22:49.810451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.669 [2024-11-20 06:22:49.811948] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:24:30.669 [2024-11-20 06:22:49.811975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:30.669 [2024-11-20 06:22:49.811985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.480 ms 00:24:30.669 [2024-11-20 06:22:49.811993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.669 [2024-11-20 06:22:49.813130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.669 [2024-11-20 06:22:49.813153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:24:30.669 [2024-11-20 06:22:49.813163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.112 ms 00:24:30.669 [2024-11-20 06:22:49.813171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.669 [2024-11-20 06:22:49.823614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.669 [2024-11-20 06:22:49.823649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:30.669 [2024-11-20 06:22:49.823659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.403 ms 00:24:30.669 [2024-11-20 06:22:49.823668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.669 [2024-11-20 06:22:49.830490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.669 [2024-11-20 06:22:49.830527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:24:30.669 [2024-11-20 06:22:49.830537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.791 ms 00:24:30.669 [2024-11-20 06:22:49.830546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.669 [2024-11-20 06:22:49.830635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.669 [2024-11-20 06:22:49.830645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:30.669 [2024-11-20 06:22:49.830654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:24:30.669 [2024-11-20 06:22:49.830665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.669 [2024-11-20 06:22:49.840912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.669 [2024-11-20 06:22:49.840940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:24:30.669 [2024-11-20 06:22:49.840950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.232 ms 00:24:30.669 [2024-11-20 06:22:49.840958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.861649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.670 [2024-11-20 06:22:49.861678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:24:30.670 [2024-11-20 06:22:49.861688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.661 ms 00:24:30.670 [2024-11-20 06:22:49.861695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.871538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.670 [2024-11-20 06:22:49.871569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:30.670 [2024-11-20 06:22:49.871577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.813 ms 00:24:30.670 [2024-11-20 06:22:49.871584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.881005] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.670 [2024-11-20 06:22:49.881036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:30.670 [2024-11-20 06:22:49.881044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.363 ms 00:24:30.670 [2024-11-20 06:22:49.881052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.881081] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:30.670 [2024-11-20 06:22:49.881094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:30.670 [2024-11-20 06:22:49.881104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:30.670 [2024-11-20 06:22:49.881121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:30.670 [2024-11-20 06:22:49.881130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:30.670 [2024-11-20 06:22:49.881248] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:30.670 [2024-11-20 06:22:49.881256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b6856bd1-e664-4b91-859a-7040bb4e7a02 00:24:30.670 [2024-11-20 06:22:49.881264] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:30.670 [2024-11-20 06:22:49.881271] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:24:30.670 [2024-11-20 06:22:49.881278] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:24:30.670 [2024-11-20 06:22:49.881287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:24:30.670 [2024-11-20 06:22:49.881294] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:30.670 [2024-11-20 06:22:49.881301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:30.670 [2024-11-20 06:22:49.881310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:30.670 [2024-11-20 06:22:49.881317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:30.670 [2024-11-20 06:22:49.881323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:30.670 [2024-11-20 06:22:49.881334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.670 [2024-11-20 06:22:49.881341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:30.670 [2024-11-20 06:22:49.881352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:24:30.670 [2024-11-20 06:22:49.881359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.893667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.670 [2024-11-20 06:22:49.893696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:24:30.670 [2024-11-20 06:22:49.893706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.293 ms 00:24:30.670 [2024-11-20 06:22:49.893713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.894051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.670 [2024-11-20 06:22:49.894066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:30.670 [2024-11-20 06:22:49.894074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.316 ms 00:24:30.670 [2024-11-20 06:22:49.894081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.935125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:49.935164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:30.670 [2024-11-20 06:22:49.935176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:49.935188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.935222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:49.935232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:30.670 [2024-11-20 06:22:49.935240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:49.935247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.935317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:49.935328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:30.670 [2024-11-20 06:22:49.935336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:49.935343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:49.935362] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:49.935369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:30.670 [2024-11-20 06:22:49.935377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:49.935384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.012212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.012270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:30.670 [2024-11-20 06:22:50.012281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.012294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.075545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.075593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:30.670 [2024-11-20 06:22:50.075605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.075613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.075690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.075700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:30.670 [2024-11-20 06:22:50.075708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.075715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.075754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.075767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:30.670 [2024-11-20 06:22:50.075775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.075782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.075863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.075873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:30.670 [2024-11-20 06:22:50.075881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.075889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.075920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.075929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:30.670 [2024-11-20 06:22:50.075940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.075947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.075980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.075988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:30.670 [2024-11-20 06:22:50.075996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.076004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 
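Two sanity checks fall out of the ftl_debug statistics dumped just before this rollback sequence. First, the per-band validity lines account exactly for the reported valid LBAs: 261120 + 261120 + 2048 = 524288. Second, the WAF line is simply total writes over user writes:

    # write amplification factor from the dump above
    awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'   # -> WAF = 1.5006

The roughly 262k blocks written beyond the user's 524288 are presumably FTL's own metadata and housekeeping traffic from the fill/checksum passes.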
[2024-11-20 06:22:50.076045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:30.670 [2024-11-20 06:22:50.076058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:30.670 [2024-11-20 06:22:50.076066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:30.670 [2024-11-20 06:22:50.076075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.670 [2024-11-20 06:22:50.076187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8161.311 ms, result 0 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:34.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78938 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78938 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78938 ']' 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:34.848 06:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:34.848 [2024-11-20 06:22:54.041157] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
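With the old target (pid 78384) gone, the harness relaunches spdk_tgt, now pinned to core 0 and booted from the saved tgt.json, so FTL comes back up through its restore path (the Restore NV cache / valid map / band info / trim / P2L / L2P steps that follow below). A sketch of the relaunch-and-wait pattern the trace records, with $SPDK_DIR standing in for /home/vagrant/spdk_repo/spdk:

    # start the new target in the background and block until its RPC
    # socket (/var/tmp/spdk.sock) answers
    "$SPDK_DIR"/build/bin/spdk_tgt --cpumask='[0]' \
        --config="$SPDK_DIR"/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                 # 78938 in this run
    waitforlisten "$spdk_tgt_pid"   # harness helper from autotest_common.sh, traced above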
00:24:34.848 [2024-11-20 06:22:54.041468] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78938 ] 00:24:34.848 [2024-11-20 06:22:54.199270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.848 [2024-11-20 06:22:54.300179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.415 [2024-11-20 06:22:54.990589] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:35.415 [2024-11-20 06:22:54.990826] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:35.674 [2024-11-20 06:22:55.134927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.674 [2024-11-20 06:22:55.134986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:35.674 [2024-11-20 06:22:55.135000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:35.674 [2024-11-20 06:22:55.135008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.674 [2024-11-20 06:22:55.135062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.674 [2024-11-20 06:22:55.135072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:35.674 [2024-11-20 06:22:55.135080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:24:35.674 [2024-11-20 06:22:55.135089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.674 [2024-11-20 06:22:55.135114] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:35.675 [2024-11-20 06:22:55.135816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:35.675 [2024-11-20 06:22:55.135837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.135844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:35.675 [2024-11-20 06:22:55.135853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.731 ms 00:24:35.675 [2024-11-20 06:22:55.135860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.137089] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:35.675 [2024-11-20 06:22:55.149736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.149958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:35.675 [2024-11-20 06:22:55.149982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.647 ms 00:24:35.675 [2024-11-20 06:22:55.149990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.150115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.150128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:35.675 [2024-11-20 06:22:55.150138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:24:35.675 [2024-11-20 06:22:55.150146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.155586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 
06:22:55.155620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:35.675 [2024-11-20 06:22:55.155630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.363 ms 00:24:35.675 [2024-11-20 06:22:55.155638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.155700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.155710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:35.675 [2024-11-20 06:22:55.155719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:24:35.675 [2024-11-20 06:22:55.155726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.155778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.155789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:35.675 [2024-11-20 06:22:55.155801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:24:35.675 [2024-11-20 06:22:55.155808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.155833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:35.675 [2024-11-20 06:22:55.159200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.159232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:35.675 [2024-11-20 06:22:55.159242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.373 ms 00:24:35.675 [2024-11-20 06:22:55.159252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.159279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.159289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:35.675 [2024-11-20 06:22:55.159297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:35.675 [2024-11-20 06:22:55.159304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.159327] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:35.675 [2024-11-20 06:22:55.159347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:35.675 [2024-11-20 06:22:55.159384] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:35.675 [2024-11-20 06:22:55.159399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:24:35.675 [2024-11-20 06:22:55.159517] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:35.675 [2024-11-20 06:22:55.159529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:35.675 [2024-11-20 06:22:55.159541] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:24:35.675 [2024-11-20 06:22:55.159552] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:35.675 [2024-11-20 06:22:55.159561] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:24:35.675 [2024-11-20 06:22:55.159572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:35.675 [2024-11-20 06:22:55.159579] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:35.675 [2024-11-20 06:22:55.159586] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:35.675 [2024-11-20 06:22:55.159594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:35.675 [2024-11-20 06:22:55.159603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.159610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:35.675 [2024-11-20 06:22:55.159617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:24:35.675 [2024-11-20 06:22:55.159625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.159709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.675 [2024-11-20 06:22:55.159718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:35.675 [2024-11-20 06:22:55.159726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:24:35.675 [2024-11-20 06:22:55.159735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.675 [2024-11-20 06:22:55.159849] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:35.675 [2024-11-20 06:22:55.159861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:35.675 [2024-11-20 06:22:55.159869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:35.675 [2024-11-20 06:22:55.159877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.159885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:35.675 [2024-11-20 06:22:55.159891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.159899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:35.675 [2024-11-20 06:22:55.159907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:35.675 [2024-11-20 06:22:55.159914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:35.675 [2024-11-20 06:22:55.159922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.159928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:35.675 [2024-11-20 06:22:55.159935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:35.675 [2024-11-20 06:22:55.159942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.159949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:35.675 [2024-11-20 06:22:55.159961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:35.675 [2024-11-20 06:22:55.159968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.159975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:35.675 [2024-11-20 06:22:55.159981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:35.675 [2024-11-20 06:22:55.159988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.159995] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:35.675 [2024-11-20 06:22:55.160001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:35.675 [2024-11-20 06:22:55.160008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:35.675 [2024-11-20 06:22:55.160014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:35.675 [2024-11-20 06:22:55.160021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:35.675 [2024-11-20 06:22:55.160027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:35.675 [2024-11-20 06:22:55.160041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:35.675 [2024-11-20 06:22:55.160048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:35.675 [2024-11-20 06:22:55.160054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:35.675 [2024-11-20 06:22:55.160060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:35.675 [2024-11-20 06:22:55.160067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:35.675 [2024-11-20 06:22:55.160074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:35.675 [2024-11-20 06:22:55.160080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:35.675 [2024-11-20 06:22:55.160087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:35.675 [2024-11-20 06:22:55.160093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.160100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:35.675 [2024-11-20 06:22:55.160108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:35.675 [2024-11-20 06:22:55.160114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.160121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:35.675 [2024-11-20 06:22:55.160127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:35.675 [2024-11-20 06:22:55.160133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.160139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:35.675 [2024-11-20 06:22:55.160145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:35.675 [2024-11-20 06:22:55.160151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.160157] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:35.675 [2024-11-20 06:22:55.160165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:35.675 [2024-11-20 06:22:55.160173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:35.675 [2024-11-20 06:22:55.160182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:35.675 [2024-11-20 06:22:55.160191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:35.676 [2024-11-20 06:22:55.160198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:35.676 [2024-11-20 06:22:55.160205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:35.676 [2024-11-20 06:22:55.160212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:35.676 [2024-11-20 06:22:55.160219] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:35.676 [2024-11-20 06:22:55.160226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:35.676 [2024-11-20 06:22:55.160234] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:35.676 [2024-11-20 06:22:55.160244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:35.676 [2024-11-20 06:22:55.160259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:35.676 [2024-11-20 06:22:55.160280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:35.676 [2024-11-20 06:22:55.160287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:35.676 [2024-11-20 06:22:55.160294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:35.676 [2024-11-20 06:22:55.160301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:35.676 [2024-11-20 06:22:55.160352] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:35.676 [2024-11-20 06:22:55.160361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:35.676 [2024-11-20 06:22:55.160377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:35.676 [2024-11-20 06:22:55.160385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:35.676 [2024-11-20 06:22:55.160392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:35.676 [2024-11-20 06:22:55.160399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:35.676 [2024-11-20 06:22:55.160406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:35.676 [2024-11-20 06:22:55.160414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:24:35.676 [2024-11-20 06:22:55.160420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:35.676 [2024-11-20 06:22:55.160469] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:35.676 [2024-11-20 06:22:55.160479] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:38.205 [2024-11-20 06:22:57.686422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.686480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:38.205 [2024-11-20 06:22:57.686518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2525.942 ms 00:24:38.205 [2024-11-20 06:22:57.686528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.711650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.711698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:38.205 [2024-11-20 06:22:57.711711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.869 ms 00:24:38.205 [2024-11-20 06:22:57.711719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.711803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.711819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:38.205 [2024-11-20 06:22:57.711827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:38.205 [2024-11-20 06:22:57.711835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.742013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.742054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:38.205 [2024-11-20 06:22:57.742065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.140 ms 00:24:38.205 [2024-11-20 06:22:57.742075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.742108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.742116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:38.205 [2024-11-20 06:22:57.742125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:38.205 [2024-11-20 06:22:57.742132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.742533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.742549] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:38.205 [2024-11-20 06:22:57.742558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.315 ms 00:24:38.205 [2024-11-20 06:22:57.742566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.742614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.742623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:38.205 [2024-11-20 06:22:57.742631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:24:38.205 [2024-11-20 06:22:57.742638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.756457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.756507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:38.205 [2024-11-20 06:22:57.756518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.799 ms 00:24:38.205 [2024-11-20 06:22:57.756526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.768871] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:38.205 [2024-11-20 06:22:57.768907] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:38.205 [2024-11-20 06:22:57.768918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.768926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:24:38.205 [2024-11-20 06:22:57.768936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.286 ms 00:24:38.205 [2024-11-20 06:22:57.768943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.782479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.782523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:24:38.205 [2024-11-20 06:22:57.782535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.496 ms 00:24:38.205 [2024-11-20 06:22:57.782542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.793940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.794090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:24:38.205 [2024-11-20 06:22:57.794106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.359 ms 00:24:38.205 [2024-11-20 06:22:57.794114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.805319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.805456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:24:38.205 [2024-11-20 06:22:57.805471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.174 ms 00:24:38.205 [2024-11-20 06:22:57.805479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.205 [2024-11-20 06:22:57.806101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.205 [2024-11-20 06:22:57.806126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:38.205 [2024-11-20 
06:22:57.806136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:24:38.205 [2024-11-20 06:22:57.806143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.463 [2024-11-20 06:22:57.875852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.463 [2024-11-20 06:22:57.875910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:38.463 [2024-11-20 06:22:57.875925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 69.688 ms 00:24:38.463 [2024-11-20 06:22:57.875933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.463 [2024-11-20 06:22:57.886612] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:38.463 [2024-11-20 06:22:57.887391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.463 [2024-11-20 06:22:57.887408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:38.463 [2024-11-20 06:22:57.887418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.401 ms 00:24:38.463 [2024-11-20 06:22:57.887426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.463 [2024-11-20 06:22:57.887548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.463 [2024-11-20 06:22:57.887562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:24:38.463 [2024-11-20 06:22:57.887571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:24:38.463 [2024-11-20 06:22:57.887579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.463 [2024-11-20 06:22:57.887634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.463 [2024-11-20 06:22:57.887644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:38.463 [2024-11-20 06:22:57.887652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:24:38.463 [2024-11-20 06:22:57.887659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.463 [2024-11-20 06:22:57.887679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.463 [2024-11-20 06:22:57.887687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:38.463 [2024-11-20 06:22:57.887697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:38.463 [2024-11-20 06:22:57.887704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.463 [2024-11-20 06:22:57.887735] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:38.463 [2024-11-20 06:22:57.887744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.463 [2024-11-20 06:22:57.887752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:38.463 [2024-11-20 06:22:57.887759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:24:38.463 [2024-11-20 06:22:57.887766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.464 [2024-11-20 06:22:57.910868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.464 [2024-11-20 06:22:57.910904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:38.464 [2024-11-20 06:22:57.910921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.084 ms 00:24:38.464 [2024-11-20 06:22:57.910930] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.464 [2024-11-20 06:22:57.911004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.464 [2024-11-20 06:22:57.911014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:38.464 [2024-11-20 06:22:57.911022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:24:38.464 [2024-11-20 06:22:57.911029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.464 [2024-11-20 06:22:57.911957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2776.626 ms, result 0 00:24:38.464 [2024-11-20 06:22:57.927219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.464 [2024-11-20 06:22:57.943203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:38.464 [2024-11-20 06:22:57.951324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:38.831 06:22:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.831 06:22:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:24:38.831 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:38.831 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:38.831 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:39.092 [2024-11-20 06:22:58.535867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:39.092 [2024-11-20 06:22:58.535919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:39.092 [2024-11-20 06:22:58.535932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:39.092 [2024-11-20 06:22:58.535943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:39.092 [2024-11-20 06:22:58.535964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:39.092 [2024-11-20 06:22:58.535973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:39.092 [2024-11-20 06:22:58.535982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:39.092 [2024-11-20 06:22:58.535989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:39.092 [2024-11-20 06:22:58.536009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:39.092 [2024-11-20 06:22:58.536017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:39.092 [2024-11-20 06:22:58.536025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:39.092 [2024-11-20 06:22:58.536032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:39.092 [2024-11-20 06:22:58.536091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.216 ms, result 0 00:24:39.092 true 00:24:39.092 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:39.351 { 00:24:39.351 "name": "ftl", 00:24:39.351 "properties": [ 00:24:39.351 { 00:24:39.351 "name": "superblock_version", 00:24:39.351 "value": 5, 00:24:39.351 "read-only": true 00:24:39.351 }, 
00:24:39.351     {
00:24:39.351       "name": "base_device",
00:24:39.351       "bands": [
00:24:39.351         {
00:24:39.351           "id": 0,
00:24:39.351           "state": "CLOSED",
00:24:39.351           "validity": 1.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 1,
00:24:39.351           "state": "CLOSED",
00:24:39.351           "validity": 1.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 2,
00:24:39.351           "state": "CLOSED",
00:24:39.351           "validity": 0.007843137254901933
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 3,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 4,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 5,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 6,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 7,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 8,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 9,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 10,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 11,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 12,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 13,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 14,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 15,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 16,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 17,
00:24:39.351           "state": "FREE",
00:24:39.351           "validity": 0.0
00:24:39.351         }
00:24:39.351       ],
00:24:39.351       "read-only": true
00:24:39.351     },
00:24:39.351     {
00:24:39.351       "name": "cache_device",
00:24:39.351       "type": "bdev",
00:24:39.351       "chunks": [
00:24:39.351         {
00:24:39.351           "id": 0,
00:24:39.351           "state": "INACTIVE",
00:24:39.351           "utilization": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 1,
00:24:39.351           "state": "OPEN",
00:24:39.351           "utilization": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 2,
00:24:39.351           "state": "OPEN",
00:24:39.351           "utilization": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 3,
00:24:39.351           "state": "FREE",
00:24:39.351           "utilization": 0.0
00:24:39.351         },
00:24:39.351         {
00:24:39.351           "id": 4,
00:24:39.351           "state": "FREE",
00:24:39.351           "utilization": 0.0
00:24:39.351         }
00:24:39.351       ],
00:24:39.351       "read-only": true
00:24:39.351     },
00:24:39.351     {
00:24:39.351       "name": "verbose_mode",
00:24:39.351       "value": true,
00:24:39.351       "unit": "",
00:24:39.351       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:24:39.351     },
00:24:39.351     {
00:24:39.351       "name": "prep_upgrade_on_shutdown",
00:24:39.351       "value": false,
00:24:39.351       "unit": "",
00:24:39.351       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:24:39.351     }
00:24:39.351   ]
00:24:39.351 }
00:24:39.352 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:24:39.352 06:22:58
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:39.352 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:39.352 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:24:39.352 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:24:39.352 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:24:39.352 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:39.352 06:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:24:39.609 Validate MD5 checksum, iteration 1 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:39.609 06:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:39.609 [2024-11-20 06:22:59.230647] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
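The ftl_get_properties steps above pipe bdev_ftl_get_properties output through jq to decide whether any NV cache chunks still hold data and whether any bands are open before the shutdown phase. A minimal standalone sketch of the first query, with the script path and bdev name taken verbatim from the trace (the surrounding test plumbing is omitted):

    # Count NV cache chunks whose utilization is non-zero; the run above
    # reported used=0, so nothing was pending in the write buffer cache.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    echo "used chunks: $used"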
00:24:39.609 [2024-11-20 06:22:59.230929] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79012 ] 00:24:39.867 [2024-11-20 06:22:59.390511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.867 [2024-11-20 06:22:59.493307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.764  [2024-11-20T06:23:01.654Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-20T06:23:03.026Z] Copying: 1024/1024 [MB] (average 641 MBps) 00:24:43.393 00:24:43.393 06:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:43.393 06:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:45.292 Validate MD5 checksum, iteration 2 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9a706539f4a4ca6de3a0bce60d848e9c 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9a706539f4a4ca6de3a0bce60d848e9c != \9\a\7\0\6\5\3\9\f\4\a\4\c\a\6\d\e\3\a\0\b\c\e\6\0\d\8\4\8\e\9\c ]] 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:45.292 06:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:45.550 [2024-11-20 06:23:04.926974] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
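Iteration 1 above reads 1024 MiB from ftln1 with spdk_dd over NVMe/TCP into a scratch file, hashes it, and string-compares the digest; iteration 2 repeats the read at --skip=1024. A compact sketch of that loop, assuming the tcp_dd wrapper invoked in the trace (the expected-sum array is illustrative bookkeeping, not the script verbatim):

    # Read consecutive 1 GiB windows of the FTL bdev and MD5 each one; on the
    # first pass the sums are recorded, on reruns they must match exactly.
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    declare -a expected
    skip=0
    for (( i = 0; i < 2; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${expected[i]:=$sum}" ]] || exit 1    # a mismatch fails the test
        skip=$(( skip + 1024 ))
    done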
00:24:45.550 [2024-11-20 06:23:04.927243] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79078 ] 00:24:45.550 [2024-11-20 06:23:05.082224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.550 [2024-11-20 06:23:05.181133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.485  [2024-11-20T06:23:07.375Z] Copying: 677/1024 [MB] (677 MBps) [2024-11-20T06:23:11.553Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:24:51.920 00:24:51.920 06:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:51.920 06:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e7524cb999f4d10186c171f4e3aeba56 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e7524cb999f4d10186c171f4e3aeba56 != \e\7\5\2\4\c\b\9\9\9\f\4\d\1\0\1\8\6\c\1\7\1\f\4\e\3\a\e\b\a\5\6 ]] 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78938 ]] 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78938 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79168 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79168 00:24:53.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79168 ']' 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
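The traces that follow are the dirty-shutdown half of the test: the live target (pid 78938) is killed with SIGKILL so FTL gets no chance to shut down cleanly, then a fresh spdk_tgt (pid 79168) is started from the saved tgt.json, forcing the next FTL startup down the recovery path. A sketch of that sequence under the same paths, with a simplified stand-in for the waitforlisten helper:

    # Kill the target hard so FTL is left dirty, then bring it back up from
    # the saved JSON config and poll the RPC socket until it answers.
    kill -9 "$spdk_tgt_pid"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done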
00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:53.818 06:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:53.818 [2024-11-20 06:23:13.283084] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:24:53.818 [2024-11-20 06:23:13.283365] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79168 ] 00:24:53.818 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 78938 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:24:53.818 [2024-11-20 06:23:13.438886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.075 [2024-11-20 06:23:13.518429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.642 [2024-11-20 06:23:14.093674] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:54.642 [2024-11-20 06:23:14.093849] bdev.c:8413:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:54.642 [2024-11-20 06:23:14.236779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.236933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:54.642 [2024-11-20 06:23:14.236996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:54.642 [2024-11-20 06:23:14.237020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.237088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.237114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:54.642 [2024-11-20 06:23:14.237133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:24:54.642 [2024-11-20 06:23:14.237152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.237189] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:54.642 [2024-11-20 06:23:14.237971] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:54.642 [2024-11-20 06:23:14.238075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.238144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:54.642 [2024-11-20 06:23:14.238168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:24:54.642 [2024-11-20 06:23:14.238187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.238666] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:54.642 [2024-11-20 06:23:14.253969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.254001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:54.642 [2024-11-20 06:23:14.254014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.303 ms 00:24:54.642 [2024-11-20 06:23:14.254021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.262727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:24:54.642 [2024-11-20 06:23:14.262757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:54.642 [2024-11-20 06:23:14.262769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:24:54.642 [2024-11-20 06:23:14.262776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.263083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.263104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:54.642 [2024-11-20 06:23:14.263112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:24:54.642 [2024-11-20 06:23:14.263120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.263167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.263176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:54.642 [2024-11-20 06:23:14.263184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:24:54.642 [2024-11-20 06:23:14.263192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.263216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.263225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:54.642 [2024-11-20 06:23:14.263232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:54.642 [2024-11-20 06:23:14.263239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.263258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:54.642 [2024-11-20 06:23:14.266150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.266177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:54.642 [2024-11-20 06:23:14.266186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.896 ms 00:24:54.642 [2024-11-20 06:23:14.266193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.266222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.642 [2024-11-20 06:23:14.266230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:54.642 [2024-11-20 06:23:14.266238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:54.642 [2024-11-20 06:23:14.266245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.642 [2024-11-20 06:23:14.266264] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:54.642 [2024-11-20 06:23:14.266280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:54.642 [2024-11-20 06:23:14.266313] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:54.642 [2024-11-20 06:23:14.266330] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:24:54.642 [2024-11-20 06:23:14.266430] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:54.642 [2024-11-20 06:23:14.266440] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:54.642 [2024-11-20 06:23:14.266450] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:24:54.642 [2024-11-20 06:23:14.266460] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:54.642 [2024-11-20 06:23:14.266468] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:54.642 [2024-11-20 06:23:14.266476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:54.642 [2024-11-20 06:23:14.266483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:54.642 [2024-11-20 06:23:14.266510] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:54.642 [2024-11-20 06:23:14.266518] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:54.643 [2024-11-20 06:23:14.266527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.643 [2024-11-20 06:23:14.266536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:54.643 [2024-11-20 06:23:14.266544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:24:54.643 [2024-11-20 06:23:14.266551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.643 [2024-11-20 06:23:14.266636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.643 [2024-11-20 06:23:14.266644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:54.643 [2024-11-20 06:23:14.266652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:24:54.643 [2024-11-20 06:23:14.266658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.643 [2024-11-20 06:23:14.266758] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:54.643 [2024-11-20 06:23:14.266768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:54.643 [2024-11-20 06:23:14.266778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:54.643 [2024-11-20 06:23:14.266786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.266793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:54.643 [2024-11-20 06:23:14.266800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.266807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:54.643 [2024-11-20 06:23:14.266814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:54.643 [2024-11-20 06:23:14.266820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:54.643 [2024-11-20 06:23:14.266826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.266833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:54.643 [2024-11-20 06:23:14.266840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:54.643 [2024-11-20 06:23:14.266846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.266852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:54.643 [2024-11-20 06:23:14.266860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
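The layout figures above are internally consistent and easy to check: the base-device data region (type 0x9) spans 0x480000 blocks, and assuming FTL's default 20% over-provisioning (the ratio itself is not printed in this log) that leaves exactly the 3774873 L2P entries reported; at the stated 4 bytes per entry the map needs about 14.4 MiB, which is why the l2p region in the NV cache dump is sized at 14.50 MiB. As shell arithmetic:

    echo $(( 0x480000 ))            # 4718592 blocks in base-device region type 0x9
    echo $(( 4718592 * 80 / 100 ))  # 3774873 user blocks = L2P entries (assumes 20% OP)
    echo $(( 3774873 * 4 ))         # 15099492 bytes of L2P at 4 B/entry, ~14.4 MiB
    # -> fits the 14.50 MiB l2p region shown in the NV cache layout dump above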
00:24:54.643 [2024-11-20 06:23:14.266867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.266873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:54.643 [2024-11-20 06:23:14.266880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:54.643 [2024-11-20 06:23:14.266886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.266893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:54.643 [2024-11-20 06:23:14.266899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:54.643 [2024-11-20 06:23:14.266905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.643 [2024-11-20 06:23:14.266921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:54.643 [2024-11-20 06:23:14.266933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:54.643 [2024-11-20 06:23:14.266939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.643 [2024-11-20 06:23:14.266946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:54.643 [2024-11-20 06:23:14.266952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:54.643 [2024-11-20 06:23:14.266959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.643 [2024-11-20 06:23:14.266965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:54.643 [2024-11-20 06:23:14.266971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:54.643 [2024-11-20 06:23:14.266978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.643 [2024-11-20 06:23:14.266984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:54.643 [2024-11-20 06:23:14.266990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:54.643 [2024-11-20 06:23:14.266996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.267003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:54.643 [2024-11-20 06:23:14.267009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:54.643 [2024-11-20 06:23:14.267015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.267022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:54.643 [2024-11-20 06:23:14.267028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:54.643 [2024-11-20 06:23:14.267034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.267041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:54.643 [2024-11-20 06:23:14.267047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:54.643 [2024-11-20 06:23:14.267053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.267060] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:54.643 [2024-11-20 06:23:14.267067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:54.643 [2024-11-20 06:23:14.267074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:54.643 [2024-11-20 06:23:14.267082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:24:54.643 [2024-11-20 06:23:14.267090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:54.643 [2024-11-20 06:23:14.267096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:54.643 [2024-11-20 06:23:14.267103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:54.643 [2024-11-20 06:23:14.267109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:54.643 [2024-11-20 06:23:14.267116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:54.643 [2024-11-20 06:23:14.267122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:54.643 [2024-11-20 06:23:14.267130] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:54.643 [2024-11-20 06:23:14.267139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:54.643 [2024-11-20 06:23:14.267154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:54.643 [2024-11-20 06:23:14.267174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:54.643 [2024-11-20 06:23:14.267182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:54.643 [2024-11-20 06:23:14.267188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:54.643 [2024-11-20 06:23:14.267195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:54.643 [2024-11-20 06:23:14.267244] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:24:54.643 [2024-11-20 06:23:14.267252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.643 [2024-11-20 06:23:14.267269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:54.643 [2024-11-20 06:23:14.267276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:54.643 [2024-11-20 06:23:14.267283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:54.643 [2024-11-20 06:23:14.267291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.643 [2024-11-20 06:23:14.267297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:54.643 [2024-11-20 06:23:14.267304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:24:54.643 [2024-11-20 06:23:14.267311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.290707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.290737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:54.902 [2024-11-20 06:23:14.290747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.336 ms 00:24:54.902 [2024-11-20 06:23:14.290754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.290789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.290797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:54.902 [2024-11-20 06:23:14.290804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:54.902 [2024-11-20 06:23:14.290812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.320563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.320593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:54.902 [2024-11-20 06:23:14.320602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.703 ms 00:24:54.902 [2024-11-20 06:23:14.320610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.320634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.320642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:54.902 [2024-11-20 06:23:14.320650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:54.902 [2024-11-20 06:23:14.320657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.320744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.320754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:54.902 [2024-11-20 06:23:14.320763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:24:54.902 [2024-11-20 06:23:14.320770] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.320806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.320813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:54.902 [2024-11-20 06:23:14.320821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:24:54.902 [2024-11-20 06:23:14.320828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.334522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.334550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:54.902 [2024-11-20 06:23:14.334559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.675 ms 00:24:54.902 [2024-11-20 06:23:14.334566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.334668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.334678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:24:54.902 [2024-11-20 06:23:14.334686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:54.902 [2024-11-20 06:23:14.334694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.362989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.363035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:24:54.902 [2024-11-20 06:23:14.363050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.278 ms 00:24:54.902 [2024-11-20 06:23:14.363060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.373266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.373386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:54.902 [2024-11-20 06:23:14.373408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:24:54.902 [2024-11-20 06:23:14.373416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.902 [2024-11-20 06:23:14.426979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.902 [2024-11-20 06:23:14.427031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:54.902 [2024-11-20 06:23:14.427048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.511 ms 00:24:54.902 [2024-11-20 06:23:14.427056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.903 [2024-11-20 06:23:14.427183] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:24:54.903 [2024-11-20 06:23:14.427272] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:24:54.903 [2024-11-20 06:23:14.427357] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:24:54.903 [2024-11-20 06:23:14.427441] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:24:54.903 [2024-11-20 06:23:14.427449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.903 [2024-11-20 06:23:14.427457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:24:54.903 [2024-11-20 
06:23:14.427465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:24:54.903 [2024-11-20 06:23:14.427472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.903 [2024-11-20 06:23:14.427550] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:24:54.903 [2024-11-20 06:23:14.427589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.903 [2024-11-20 06:23:14.427600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:24:54.903 [2024-11-20 06:23:14.427608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:24:54.903 [2024-11-20 06:23:14.427616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.903 [2024-11-20 06:23:14.441855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.903 [2024-11-20 06:23:14.441890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:24:54.903 [2024-11-20 06:23:14.441901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.218 ms 00:24:54.903 [2024-11-20 06:23:14.441909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.903 [2024-11-20 06:23:14.450262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.903 [2024-11-20 06:23:14.450291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:24:54.903 [2024-11-20 06:23:14.450301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:24:54.903 [2024-11-20 06:23:14.450309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.903 [2024-11-20 06:23:14.450404] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:24:54.903 [2024-11-20 06:23:14.450557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.903 [2024-11-20 06:23:14.450570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:24:54.903 [2024-11-20 06:23:14.450578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.154 ms 00:24:54.903 [2024-11-20 06:23:14.450585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.469 [2024-11-20 06:23:14.885919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.469 [2024-11-20 06:23:14.885986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:24:55.469 [2024-11-20 06:23:14.886001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 434.561 ms 00:24:55.469 [2024-11-20 06:23:14.886010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.469 [2024-11-20 06:23:14.889832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.469 [2024-11-20 06:23:14.889992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:24:55.469 [2024-11-20 06:23:14.890010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.751 ms 00:24:55.469 [2024-11-20 06:23:14.890018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.469 [2024-11-20 06:23:14.890373] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:24:55.469 [2024-11-20 06:23:14.890402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.469 [2024-11-20 06:23:14.890410] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:24:55.469 [2024-11-20 06:23:14.890420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:24:55.469 [2024-11-20 06:23:14.890427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.469 [2024-11-20 06:23:14.890456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.469 [2024-11-20 06:23:14.890465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:24:55.469 [2024-11-20 06:23:14.890473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:55.469 [2024-11-20 06:23:14.890481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.469 [2024-11-20 06:23:14.890536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 440.130 ms, result 0 00:24:55.469 [2024-11-20 06:23:14.890575] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:24:55.469 [2024-11-20 06:23:14.890667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.469 [2024-11-20 06:23:14.890677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:24:55.469 [2024-11-20 06:23:14.890685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:24:55.469 [2024-11-20 06:23:14.890692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.727 [2024-11-20 06:23:15.312638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.727 [2024-11-20 06:23:15.312818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:24:55.727 [2024-11-20 06:23:15.312842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 420.995 ms 00:24:55.727 [2024-11-20 06:23:15.312850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.727 [2024-11-20 06:23:15.316512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.727 [2024-11-20 06:23:15.316545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:24:55.727 [2024-11-20 06:23:15.316555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.736 ms 00:24:55.727 [2024-11-20 06:23:15.316563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.727 [2024-11-20 06:23:15.316850] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:24:55.727 [2024-11-20 06:23:15.316874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.727 [2024-11-20 06:23:15.316882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:24:55.727 [2024-11-20 06:23:15.316890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:24:55.727 [2024-11-20 06:23:15.316898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.727 [2024-11-20 06:23:15.317265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.727 [2024-11-20 06:23:15.317300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:24:55.727 [2024-11-20 06:23:15.317311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:55.727 [2024-11-20 06:23:15.317319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.727 [2024-11-20 
06:23:15.317364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 426.785 ms, result 0 00:24:55.727 [2024-11-20 06:23:15.317406] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:55.727 [2024-11-20 06:23:15.317416] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:55.727 [2024-11-20 06:23:15.317425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.727 [2024-11-20 06:23:15.317433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:24:55.727 [2024-11-20 06:23:15.317441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 867.037 ms 00:24:55.727 [2024-11-20 06:23:15.317449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.727 [2024-11-20 06:23:15.317478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.727 [2024-11-20 06:23:15.317487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:24:55.727 [2024-11-20 06:23:15.317521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:55.727 [2024-11-20 06:23:15.317528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.727 [2024-11-20 06:23:15.328226] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:55.727 [2024-11-20 06:23:15.328436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.727 [2024-11-20 06:23:15.328450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:55.728 [2024-11-20 06:23:15.328460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.891 ms 00:24:55.728 [2024-11-20 06:23:15.328468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.329162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.728 [2024-11-20 06:23:15.329181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:24:55.728 [2024-11-20 06:23:15.329192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:24:55.728 [2024-11-20 06:23:15.329200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.331431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.728 [2024-11-20 06:23:15.331548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:24:55.728 [2024-11-20 06:23:15.331561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.216 ms 00:24:55.728 [2024-11-20 06:23:15.331569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.331613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.728 [2024-11-20 06:23:15.331622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:24:55.728 [2024-11-20 06:23:15.331630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:55.728 [2024-11-20 06:23:15.331639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.331738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.728 [2024-11-20 06:23:15.331747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:55.728 
[2024-11-20 06:23:15.331755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:24:55.728 [2024-11-20 06:23:15.331762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.331781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.728 [2024-11-20 06:23:15.331789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:55.728 [2024-11-20 06:23:15.331797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:55.728 [2024-11-20 06:23:15.331804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.331830] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:55.728 [2024-11-20 06:23:15.331839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.728 [2024-11-20 06:23:15.331846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:55.728 [2024-11-20 06:23:15.331853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:24:55.728 [2024-11-20 06:23:15.331860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.331910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.728 [2024-11-20 06:23:15.331918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:55.728 [2024-11-20 06:23:15.331926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:24:55.728 [2024-11-20 06:23:15.331933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.728 [2024-11-20 06:23:15.332779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1095.581 ms, result 0 00:24:55.728 [2024-11-20 06:23:15.345138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.987 [2024-11-20 06:23:15.361129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:55.987 [2024-11-20 06:23:15.369246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:56.245 06:23:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:56.245 06:23:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:24:56.245 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:56.245 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:56.246 Validate MD5 checksum, iteration 1 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:56.246 06:23:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:56.246 06:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:56.505 [2024-11-20 06:23:15.885091] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 00:24:56.505 [2024-11-20 06:23:15.885208] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79203 ] 00:24:56.505 [2024-11-20 06:23:16.042578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.762 [2024-11-20 06:23:16.139822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.137  [2024-11-20T06:23:18.336Z] Copying: 680/1024 [MB] (680 MBps) [2024-11-20T06:23:19.268Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:24:59.635 00:24:59.635 06:23:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:59.635 06:23:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:02.162 Validate MD5 checksum, iteration 2 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9a706539f4a4ca6de3a0bce60d848e9c 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9a706539f4a4ca6de3a0bce60d848e9c != \9\a\7\0\6\5\3\9\f\4\a\4\c\a\6\d\e\3\a\0\b\c\e\6\0\d\8\4\8\e\9\c ]] 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:02.162 06:23:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:02.162 [2024-11-20 06:23:21.470954] Starting SPDK v25.01-pre git sha1 
9b64b1304 / DPDK 24.03.0 initialization... 00:25:02.162 [2024-11-20 06:23:21.471065] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79264 ] 00:25:02.162 [2024-11-20 06:23:21.631769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.162 [2024-11-20 06:23:21.727419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.060  [2024-11-20T06:23:23.951Z] Copying: 683/1024 [MB] (683 MBps) [2024-11-20T06:23:24.609Z] Copying: 1024/1024 [MB] (average 686 MBps) 00:25:04.976 00:25:04.976 06:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:04.976 06:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e7524cb999f4d10186c171f4e3aeba56 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e7524cb999f4d10186c171f4e3aeba56 != \e\7\5\2\4\c\b\9\9\9\f\4\d\1\0\1\8\6\c\1\7\1\f\4\e\3\a\e\b\a\5\6 ]] 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79168 ]] 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79168 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79168 ']' 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79168 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79168 00:25:07.527 killing process with pid 79168 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79168' 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 79168 00:25:07.527 06:23:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79168 00:25:07.785 [2024-11-20 06:23:27.399692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:07.785 [2024-11-20 06:23:27.410781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.785 [2024-11-20 06:23:27.410818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:07.785 [2024-11-20 06:23:27.410828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:07.785 [2024-11-20 06:23:27.410836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.785 [2024-11-20 06:23:27.410853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:07.785 [2024-11-20 06:23:27.413033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.785 [2024-11-20 06:23:27.413058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:07.785 [2024-11-20 06:23:27.413069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.169 ms 00:25:07.785 [2024-11-20 06:23:27.413076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.785 [2024-11-20 06:23:27.413256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.785 [2024-11-20 06:23:27.413265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:07.785 [2024-11-20 06:23:27.413272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:25:07.785 [2024-11-20 06:23:27.413278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.785 [2024-11-20 06:23:27.414371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.785 [2024-11-20 06:23:27.414503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:07.785 [2024-11-20 06:23:27.414517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.082 ms 00:25:07.785 [2024-11-20 06:23:27.414523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.785 [2024-11-20 06:23:27.415457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.785 [2024-11-20 06:23:27.415471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:07.785 [2024-11-20 06:23:27.415479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.904 ms 00:25:07.785 [2024-11-20 06:23:27.415486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.422952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.422978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:08.042 [2024-11-20 06:23:27.422986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.430 ms 00:25:08.042 [2024-11-20 06:23:27.422996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.426889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.426914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:08.042 [2024-11-20 06:23:27.426929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.866 ms 00:25:08.042 [2024-11-20 06:23:27.426935] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.427002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.427010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:08.042 [2024-11-20 06:23:27.427017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:25:08.042 [2024-11-20 06:23:27.427023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.434090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.434200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:25:08.042 [2024-11-20 06:23:27.434212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.051 ms 00:25:08.042 [2024-11-20 06:23:27.434218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.441208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.441298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:25:08.042 [2024-11-20 06:23:27.441309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.967 ms 00:25:08.042 [2024-11-20 06:23:27.441314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.448162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.448251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:08.042 [2024-11-20 06:23:27.448262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.825 ms 00:25:08.042 [2024-11-20 06:23:27.448267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.455229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.455319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:08.042 [2024-11-20 06:23:27.455367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.920 ms 00:25:08.042 [2024-11-20 06:23:27.455385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.455416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:08.042 [2024-11-20 06:23:27.455484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:08.042 [2024-11-20 06:23:27.455507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:08.042 [2024-11-20 06:23:27.455513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:08.042 [2024-11-20 06:23:27.455520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 
[2024-11-20 06:23:27.455553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:08.042 [2024-11-20 06:23:27.455616] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:08.042 [2024-11-20 06:23:27.455621] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b6856bd1-e664-4b91-859a-7040bb4e7a02 00:25:08.042 [2024-11-20 06:23:27.455627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:08.042 [2024-11-20 06:23:27.455633] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:08.042 [2024-11-20 06:23:27.455639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:08.042 [2024-11-20 06:23:27.455645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:08.042 [2024-11-20 06:23:27.455650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:08.042 [2024-11-20 06:23:27.455657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:08.042 [2024-11-20 06:23:27.455663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:08.042 [2024-11-20 06:23:27.455668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:08.042 [2024-11-20 06:23:27.455673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:08.042 [2024-11-20 06:23:27.455678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.455688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:08.042 [2024-11-20 06:23:27.455694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:25:08.042 [2024-11-20 06:23:27.455701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.465480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.465514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:08.042 [2024-11-20 06:23:27.465523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.765 ms 00:25:08.042 [2024-11-20 06:23:27.465545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
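The tcp_dd/md5sum round trips traced above (ftl/upgrade_shutdown.sh lines 96-105 in the xtrace) boil down to a read-back loop: pull a 1 GiB window (1024 x 1 MiB blocks) off the ftln1 bdev over NVMe/TCP, hash it, compare, and slide the window forward. A minimal bash sketch of that loop, assuming tcp_dd is the helper invoked in the trace and ref_sums is bookkeeping for the expected per-iteration checksums (the flags, stride, and echo text are verbatim from the trace; the reference array is an assumption):

    validate_checksums() {
        local iterations=$1 file=$2 skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $(( i + 1 ))"
            # Read 1024 x 1 MiB blocks from the FTL bdev over NVMe/TCP at qd=2,
            # advancing the read offset by 1024 blocks each pass, as traced.
            tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$(( skip + 1024 ))
            sum=$(md5sum "$file" | cut -f1 -d' ')
            # The trace shows 9a706539... and e7524cb9... as the two expected
            # values; ref_sums[i] stands in for however they were recorded.
            [[ $sum == "${ref_sums[i]}" ]] || return 1
        done
    }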
00:25:08.042 [2024-11-20 06:23:27.465828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.042 [2024-11-20 06:23:27.465840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:08.042 [2024-11-20 06:23:27.465847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:25:08.042 [2024-11-20 06:23:27.465853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.499373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.042 [2024-11-20 06:23:27.499399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:08.042 [2024-11-20 06:23:27.499408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.042 [2024-11-20 06:23:27.499415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.499443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.042 [2024-11-20 06:23:27.499449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:08.042 [2024-11-20 06:23:27.499455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.042 [2024-11-20 06:23:27.499461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.499539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.042 [2024-11-20 06:23:27.499548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:08.042 [2024-11-20 06:23:27.499555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.042 [2024-11-20 06:23:27.499561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.042 [2024-11-20 06:23:27.499574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.042 [2024-11-20 06:23:27.499583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:08.042 [2024-11-20 06:23:27.499590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.042 [2024-11-20 06:23:27.499595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.561123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.561258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:08.043 [2024-11-20 06:23:27.561273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.561279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.610836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.610871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:08.043 [2024-11-20 06:23:27.610879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.610885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.610954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.610963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:08.043 [2024-11-20 06:23:27.610969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.610976] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.611020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.611028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:08.043 [2024-11-20 06:23:27.611039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.611051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.611121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.611129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:08.043 [2024-11-20 06:23:27.611135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.611141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.611166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.611173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:08.043 [2024-11-20 06:23:27.611179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.611188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.611216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.611223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:08.043 [2024-11-20 06:23:27.611228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.611234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.611268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:08.043 [2024-11-20 06:23:27.611276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:08.043 [2024-11-20 06:23:27.611284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:08.043 [2024-11-20 06:23:27.611290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.043 [2024-11-20 06:23:27.611379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 200.576 ms, result 0 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:09.000 Remove shared memory files 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:09.000 06:23:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78938 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:09.000 ************************************ 00:25:09.000 END TEST ftl_upgrade_shutdown 00:25:09.000 ************************************ 00:25:09.000 00:25:09.000 real 1m22.700s 00:25:09.000 user 1m54.610s 00:25:09.000 sys 0m18.248s 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:09.000 06:23:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:09.000 06:23:28 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:25:09.000 06:23:28 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:25:09.000 06:23:28 ftl -- ftl/ftl.sh@14 -- # killprocess 72410 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@952 -- # '[' -z 72410 ']' 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@956 -- # kill -0 72410 00:25:09.000 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72410) - No such process 00:25:09.000 Process with pid 72410 is not found 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72410 is not found' 00:25:09.000 06:23:28 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:25:09.000 06:23:28 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79372 00:25:09.000 06:23:28 ftl -- ftl/ftl.sh@20 -- # waitforlisten 79372 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@833 -- # '[' -z 79372 ']' 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:09.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.000 06:23:28 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:09.000 06:23:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:09.000 [2024-11-20 06:23:28.390634] Starting SPDK v25.01-pre git sha1 9b64b1304 / DPDK 24.03.0 initialization... 
9b64b1304 / DPDK 24.03.0 initialization...
00:25:09.000 [2024-11-20 06:23:28.390933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79372 ] 00:25:09.000 [2024-11-20 06:23:28.552428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.273 [2024-11-20 06:23:28.669284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.840 06:23:29 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:09.840 06:23:29 ftl -- common/autotest_common.sh@866 -- # return 0 00:25:09.840 06:23:29 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:10.097 nvme0n1 00:25:10.097 06:23:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:25:10.097 06:23:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:10.097 06:23:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:10.354 06:23:29 ftl -- ftl/common.sh@28 -- # stores=f0b31910-ac89-407f-8282-6400b663eef2 00:25:10.354 06:23:29 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:25:10.354 06:23:29 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0b31910-ac89-407f-8282-6400b663eef2 00:25:10.612 06:23:30 ftl -- ftl/ftl.sh@23 -- # killprocess 79372 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@952 -- # '[' -z 79372 ']' 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@956 -- # kill -0 79372 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@957 -- # uname 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79372 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:10.612 killing process with pid 79372 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79372' 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@971 -- # kill 79372 00:25:10.612 06:23:30 ftl -- common/autotest_common.sh@976 -- # wait 79372 00:25:11.984 06:23:31 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:11.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:11.984 Waiting for block devices as requested 00:25:11.984 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:11.984 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:11.984 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:12.242 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.503 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:17.503 Remove shared memory files 00:25:17.503 06:23:36 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:25:17.503 06:23:36 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:17.503 06:23:36 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:25:17.503 06:23:36 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:25:17.503 06:23:36 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:25:17.503 06:23:36 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:17.503 06:23:36 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:25:17.503 
************************************ 00:25:17.503 END TEST ftl 00:25:17.503 ************************************ 00:25:17.503 00:25:17.503 real 10m4.530s 00:25:17.503 user 13m9.617s 00:25:17.503 sys 1m14.548s 00:25:17.503 06:23:36 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:17.503 06:23:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:17.503 06:23:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:17.503 06:23:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:17.503 06:23:36 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:17.503 06:23:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:17.503 06:23:36 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:17.503 06:23:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:17.503 06:23:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:17.503 06:23:36 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:17.503 06:23:36 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:17.503 06:23:36 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:17.503 06:23:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:17.503 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:25:17.503 06:23:36 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:17.503 06:23:36 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:25:17.503 06:23:36 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:25:17.503 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:25:18.436 INFO: APP EXITING 00:25:18.436 INFO: killing all VMs 00:25:18.436 INFO: killing vhost app 00:25:18.436 INFO: EXIT DONE 00:25:18.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:18.999 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:18.999 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:18.999 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:18.999 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:19.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:19.514 Cleaning 00:25:19.514 Removing: /var/run/dpdk/spdk0/config 00:25:19.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:19.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:19.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:19.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:19.514 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:19.514 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:19.514 Removing: /var/run/dpdk/spdk0 00:25:19.514 Removing: /var/run/dpdk/spdk_pid56974 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57182 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57394 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57493 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57527 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57649 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57667 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57861 00:25:19.514 Removing: /var/run/dpdk/spdk_pid57954 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58049 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58155 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58247 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58292 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58323 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58399 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58483 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58919 00:25:19.514 Removing: /var/run/dpdk/spdk_pid58978 
00:25:19.514 Removing: /var/run/dpdk/spdk_pid59035 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59051 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59159 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59174 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59271 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59287 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59346 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59364 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59417 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59435 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59595 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59631 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59715 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59887 00:25:19.514 Removing: /var/run/dpdk/spdk_pid59971 00:25:19.514 Removing: /var/run/dpdk/spdk_pid60013 00:25:19.514 Removing: /var/run/dpdk/spdk_pid60445 00:25:19.514 Removing: /var/run/dpdk/spdk_pid60543 00:25:19.514 Removing: /var/run/dpdk/spdk_pid60657 00:25:19.514 Removing: /var/run/dpdk/spdk_pid60710 00:25:19.514 Removing: /var/run/dpdk/spdk_pid60730 00:25:19.514 Removing: /var/run/dpdk/spdk_pid60814 00:25:19.514 Removing: /var/run/dpdk/spdk_pid61433 00:25:19.514 Removing: /var/run/dpdk/spdk_pid61469 00:25:19.514 Removing: /var/run/dpdk/spdk_pid61945 00:25:19.514 Removing: /var/run/dpdk/spdk_pid62043 00:25:19.514 Removing: /var/run/dpdk/spdk_pid62152 00:25:19.514 Removing: /var/run/dpdk/spdk_pid62206 00:25:19.514 Removing: /var/run/dpdk/spdk_pid62236 00:25:19.514 Removing: /var/run/dpdk/spdk_pid62257 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64106 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64243 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64247 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64265 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64305 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64309 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64321 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64366 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64370 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64382 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64428 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64432 00:25:19.514 Removing: /var/run/dpdk/spdk_pid64444 00:25:19.514 Removing: /var/run/dpdk/spdk_pid65821 00:25:19.514 Removing: /var/run/dpdk/spdk_pid65929 00:25:19.514 Removing: /var/run/dpdk/spdk_pid67334 00:25:19.514 Removing: /var/run/dpdk/spdk_pid68718 00:25:19.514 Removing: /var/run/dpdk/spdk_pid68805 00:25:19.514 Removing: /var/run/dpdk/spdk_pid68887 00:25:19.514 Removing: /var/run/dpdk/spdk_pid68965 00:25:19.514 Removing: /var/run/dpdk/spdk_pid69070 00:25:19.514 Removing: /var/run/dpdk/spdk_pid69144 00:25:19.514 Removing: /var/run/dpdk/spdk_pid69286 00:25:19.514 Removing: /var/run/dpdk/spdk_pid69650 00:25:19.514 Removing: /var/run/dpdk/spdk_pid69691 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70150 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70332 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70437 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70546 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70595 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70619 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70932 00:25:19.514 Removing: /var/run/dpdk/spdk_pid70993 00:25:19.514 Removing: /var/run/dpdk/spdk_pid71066 00:25:19.514 Removing: /var/run/dpdk/spdk_pid71455 00:25:19.514 Removing: /var/run/dpdk/spdk_pid71602 00:25:19.514 Removing: /var/run/dpdk/spdk_pid72410 00:25:19.776 Removing: /var/run/dpdk/spdk_pid72542 00:25:19.776 Removing: /var/run/dpdk/spdk_pid72711 00:25:19.776 Removing: 
/var/run/dpdk/spdk_pid72798 00:25:19.776 Removing: /var/run/dpdk/spdk_pid73125 00:25:19.776 Removing: /var/run/dpdk/spdk_pid73367 00:25:19.776 Removing: /var/run/dpdk/spdk_pid73708 00:25:19.776 Removing: /var/run/dpdk/spdk_pid73890 00:25:19.776 Removing: /var/run/dpdk/spdk_pid73987 00:25:19.776 Removing: /var/run/dpdk/spdk_pid74034 00:25:19.776 Removing: /var/run/dpdk/spdk_pid74144 00:25:19.776 Removing: /var/run/dpdk/spdk_pid74175 00:25:19.776 Removing: /var/run/dpdk/spdk_pid74233 00:25:19.776 Removing: /var/run/dpdk/spdk_pid74417 00:25:19.776 Removing: /var/run/dpdk/spdk_pid74640 00:25:19.776 Removing: /var/run/dpdk/spdk_pid74903 00:25:19.776 Removing: /var/run/dpdk/spdk_pid75195 00:25:19.776 Removing: /var/run/dpdk/spdk_pid75469 00:25:19.776 Removing: /var/run/dpdk/spdk_pid76172 00:25:19.776 Removing: /var/run/dpdk/spdk_pid76309 00:25:19.776 Removing: /var/run/dpdk/spdk_pid76407 00:25:19.776 Removing: /var/run/dpdk/spdk_pid77284 00:25:19.776 Removing: /var/run/dpdk/spdk_pid77360 00:25:19.776 Removing: /var/run/dpdk/spdk_pid77725 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78022 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78384 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78507 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78553 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78608 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78664 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78722 00:25:19.776 Removing: /var/run/dpdk/spdk_pid78938 00:25:19.776 Removing: /var/run/dpdk/spdk_pid79012 00:25:19.776 Removing: /var/run/dpdk/spdk_pid79078 00:25:19.776 Removing: /var/run/dpdk/spdk_pid79168 00:25:19.776 Removing: /var/run/dpdk/spdk_pid79203 00:25:19.776 Removing: /var/run/dpdk/spdk_pid79264 00:25:19.776 Removing: /var/run/dpdk/spdk_pid79372 00:25:19.776 Clean 00:25:19.776 06:23:39 -- common/autotest_common.sh@1451 -- # return 0 00:25:19.776 06:23:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:19.776 06:23:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.776 06:23:39 -- common/autotest_common.sh@10 -- # set +x 00:25:19.776 06:23:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:19.776 06:23:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.776 06:23:39 -- common/autotest_common.sh@10 -- # set +x 00:25:19.776 06:23:39 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:19.776 06:23:39 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:19.776 06:23:39 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:19.776 06:23:39 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:19.776 06:23:39 -- spdk/autotest.sh@394 -- # hostname 00:25:19.776 06:23:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:20.033 geninfo: WARNING: invalid characters removed from testname! 
00:25:41.985 06:24:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:45.295 06:24:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:47.190 06:24:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:49.718 06:24:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:51.641 06:24:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.166 06:24:13 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:56.121 06:24:15 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:56.121 06:24:15 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:56.121 06:24:15 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:56.121 06:24:15 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:56.121 06:24:15 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:56.121 06:24:15 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:56.121 + [[ -n 5027 ]] 00:25:56.121 + sudo kill 5027 00:25:56.129 [Pipeline] } 00:25:56.144 [Pipeline] // timeout 00:25:56.149 [Pipeline] } 00:25:56.163 [Pipeline] // stage 00:25:56.169 [Pipeline] } 00:25:56.183 [Pipeline] // catchError 00:25:56.192 [Pipeline] stage 00:25:56.194 [Pipeline] { (Stop VM) 00:25:56.206 [Pipeline] sh 00:25:56.481 + vagrant halt 00:25:59.003 ==> default: Halting domain... 
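The lcov passes traced above (spdk/autotest.sh lines 394-404 in the xtrace) form a capture-merge-filter pipeline: capture counters for this run tagged with the builder hostname, fold them into the pre-test baseline, then strip out-of-scope sources before reporting. A condensed sketch under those assumptions ($repo and $out are placeholders; the --rc flags and filter patterns are taken from the trace, the genhtml-related flags are omitted for brevity, and the loop collapses what the trace runs as separate invocations):

    generate_coverage() {
        local repo=$1 out=$2 pat
        local opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
        # Capture counters for this run, tagged with the builder hostname.
        lcov $opts -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
        # Fold the pre-test baseline and the test capture into one totals file.
        lcov $opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
        # Remove bundled DPDK, system headers, and helper apps from the report.
        for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
            lcov $opts -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
        done
        rm -f "$out/cov_base.info" "$out/cov_test.info"
    }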
00:26:03.276 [Pipeline] sh 00:26:03.566 + vagrant destroy -f 00:26:06.106 ==> default: Removing domain... 00:26:06.690 [Pipeline] sh 00:26:06.977 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:26:06.989 [Pipeline] } 00:26:07.003 [Pipeline] // stage 00:26:07.009 [Pipeline] } 00:26:07.024 [Pipeline] // dir 00:26:07.030 [Pipeline] } 00:26:07.045 [Pipeline] // wrap 00:26:07.051 [Pipeline] } 00:26:07.064 [Pipeline] // catchError 00:26:07.072 [Pipeline] stage 00:26:07.075 [Pipeline] { (Epilogue) 00:26:07.088 [Pipeline] sh 00:26:07.443 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:14.170 [Pipeline] catchError 00:26:14.171 [Pipeline] { 00:26:14.183 [Pipeline] sh 00:26:14.469 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:17.871 Artifacts sizes are good 00:26:17.882 [Pipeline] } 00:26:17.894 [Pipeline] // catchError 00:26:17.904 [Pipeline] archiveArtifacts 00:26:17.910 Archiving artifacts 00:26:18.001 [Pipeline] cleanWs 00:26:18.014 [WS-CLEANUP] Deleting project workspace... 00:26:18.014 [WS-CLEANUP] Deferred wipeout is used... 00:26:18.022 [WS-CLEANUP] done 00:26:18.024 [Pipeline] } 00:26:18.042 [Pipeline] // stage 00:26:18.047 [Pipeline] } 00:26:18.062 [Pipeline] // node 00:26:18.067 [Pipeline] End of Pipeline 00:26:18.108 Finished: SUCCESS
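Both process teardowns above (pid 79168 after the upgrade/shutdown test, pid 79372 before the final cleanup) run through the same killprocess helper, and the failed pid-72410 call shows its not-found branch. A reconstruction from those three traces (autotest_common.sh lines 952-979 in the xtrace): the probe, the Linux comm-name check, and the kill/wait sequence mirror the trace, while the 2>/dev/null redirections and the sudo branch's early return are assumptions, since the traces only show the comparison being evaluated:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            # The pid-72410 call above takes this branch: probe failed, nothing to do.
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The traces show reactor_0 here; refuse to signal a bare sudo wrapper.
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }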