00:00:00.001 Started by upstream project "autotest-per-patch" build number 132398 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.161 Fetching changes from the remote Git repository 00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.233 Using shallow fetch with depth 1 00:00:00.233 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.233 > git --version # timeout=10 00:00:00.295 > git --version # 'git version 2.39.2' 00:00:00.295 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.337 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.474 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.488 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.500 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.500 > git config core.sparsecheckout # timeout=10 00:00:06.514 > git read-tree -mu HEAD # timeout=10 00:00:06.532 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.555 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.555 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.661 [Pipeline] Start of Pipeline 00:00:06.677 [Pipeline] library 00:00:06.679 Loading library shm_lib@master 00:00:06.680 Library shm_lib@master is cached. Copying from home. 00:00:06.698 [Pipeline] node 00:00:06.708 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.709 [Pipeline] { 00:00:06.718 [Pipeline] catchError 00:00:06.720 [Pipeline] { 00:00:06.732 [Pipeline] wrap 00:00:06.741 [Pipeline] { 00:00:06.749 [Pipeline] stage 00:00:06.751 [Pipeline] { (Prologue) 00:00:06.766 [Pipeline] echo 00:00:06.767 Node: VM-host-SM9 00:00:06.772 [Pipeline] cleanWs 00:00:06.780 [WS-CLEANUP] Deleting project workspace... 00:00:06.780 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.785 [WS-CLEANUP] done 00:00:06.958 [Pipeline] setCustomBuildProperty 00:00:07.043 [Pipeline] httpRequest 00:00:07.786 [Pipeline] echo 00:00:07.787 Sorcerer 10.211.164.20 is alive 00:00:07.795 [Pipeline] retry 00:00:07.797 [Pipeline] { 00:00:07.807 [Pipeline] httpRequest 00:00:07.811 HttpMethod: GET 00:00:07.812 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.812 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.818 Response Code: HTTP/1.1 200 OK 00:00:07.819 Success: Status code 200 is in the accepted range: 200,404 00:00:07.819 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.949 [Pipeline] } 00:00:15.960 [Pipeline] // retry 00:00:15.967 [Pipeline] sh 00:00:16.241 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.256 [Pipeline] httpRequest 00:00:16.647 [Pipeline] echo 00:00:16.649 Sorcerer 10.211.164.20 is alive 00:00:16.658 [Pipeline] retry 00:00:16.660 [Pipeline] { 00:00:16.674 [Pipeline] httpRequest 00:00:16.678 HttpMethod: GET 00:00:16.678 URL: http://10.211.164.20/packages/spdk_b6a8866f37656da6e240efc9cdcfe6489eeb9cc5.tar.gz 00:00:16.679 Sending request to url: http://10.211.164.20/packages/spdk_b6a8866f37656da6e240efc9cdcfe6489eeb9cc5.tar.gz 00:00:16.684 Response Code: HTTP/1.1 200 OK 00:00:16.685 Success: Status code 200 is in the accepted range: 200,404 00:00:16.685 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b6a8866f37656da6e240efc9cdcfe6489eeb9cc5.tar.gz 00:06:01.213 [Pipeline] } 00:06:01.234 [Pipeline] // retry 00:06:01.243 [Pipeline] sh 00:06:01.521 + tar --no-same-owner -xf spdk_b6a8866f37656da6e240efc9cdcfe6489eeb9cc5.tar.gz 00:06:04.835 [Pipeline] sh 00:06:05.119 + git -C spdk log --oneline -n5 00:06:05.119 b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:06:05.119 3bdf5e874 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:06:05.119 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:06:05.119 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:06:05.119 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:06:05.136 [Pipeline] writeFile 00:06:05.150 [Pipeline] sh 00:06:05.430 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:06:05.442 [Pipeline] sh 00:06:05.726 + cat autorun-spdk.conf 00:06:05.726 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:05.726 SPDK_TEST_NVME=1 00:06:05.726 SPDK_TEST_FTL=1 00:06:05.726 SPDK_TEST_ISAL=1 00:06:05.726 SPDK_RUN_ASAN=1 00:06:05.726 SPDK_RUN_UBSAN=1 00:06:05.726 SPDK_TEST_XNVME=1 00:06:05.726 SPDK_TEST_NVME_FDP=1 00:06:05.726 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:05.733 RUN_NIGHTLY=0 00:06:05.735 [Pipeline] } 00:06:05.748 [Pipeline] // stage 00:06:05.764 [Pipeline] stage 00:06:05.766 [Pipeline] { (Run VM) 00:06:05.780 [Pipeline] sh 00:06:06.061 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:06:06.061 + echo 'Start stage prepare_nvme.sh' 00:06:06.061 Start stage prepare_nvme.sh 00:06:06.061 + [[ -n 5 ]] 00:06:06.061 + disk_prefix=ex5 00:06:06.061 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:06:06.061 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:06:06.061 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:06:06.061 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:06.061 ++ 
SPDK_TEST_NVME=1 00:06:06.061 ++ SPDK_TEST_FTL=1 00:06:06.061 ++ SPDK_TEST_ISAL=1 00:06:06.061 ++ SPDK_RUN_ASAN=1 00:06:06.061 ++ SPDK_RUN_UBSAN=1 00:06:06.061 ++ SPDK_TEST_XNVME=1 00:06:06.061 ++ SPDK_TEST_NVME_FDP=1 00:06:06.061 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:06.061 ++ RUN_NIGHTLY=0 00:06:06.061 + cd /var/jenkins/workspace/nvme-vg-autotest 00:06:06.061 + nvme_files=() 00:06:06.061 + declare -A nvme_files 00:06:06.061 + backend_dir=/var/lib/libvirt/images/backends 00:06:06.061 + nvme_files['nvme.img']=5G 00:06:06.061 + nvme_files['nvme-cmb.img']=5G 00:06:06.061 + nvme_files['nvme-multi0.img']=4G 00:06:06.061 + nvme_files['nvme-multi1.img']=4G 00:06:06.061 + nvme_files['nvme-multi2.img']=4G 00:06:06.061 + nvme_files['nvme-openstack.img']=8G 00:06:06.061 + nvme_files['nvme-zns.img']=5G 00:06:06.061 + (( SPDK_TEST_NVME_PMR == 1 )) 00:06:06.061 + (( SPDK_TEST_FTL == 1 )) 00:06:06.061 + nvme_files["nvme-ftl.img"]=6G 00:06:06.061 + (( SPDK_TEST_NVME_FDP == 1 )) 00:06:06.061 + nvme_files["nvme-fdp.img"]=1G 00:06:06.061 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:06:06.062 + for nvme in "${!nvme_files[@]}" 00:06:06.062 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:06:06.062 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:06:06.062 + for nvme in "${!nvme_files[@]}" 00:06:06.062 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:06:06.062 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:06:06.062 + for nvme in "${!nvme_files[@]}" 00:06:06.062 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:06:06.062 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:06:06.062 + for nvme in "${!nvme_files[@]}" 00:06:06.062 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:06:06.062 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:06:06.062 + for nvme in "${!nvme_files[@]}" 00:06:06.062 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:06:06.321 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:06:06.321 + for nvme in "${!nvme_files[@]}" 00:06:06.321 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:06:06.321 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:06:06.321 + for nvme in "${!nvme_files[@]}" 00:06:06.321 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:06:06.321 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:06:06.321 + for nvme in "${!nvme_files[@]}" 00:06:06.321 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:06:06.321 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:06:06.321 + for nvme in "${!nvme_files[@]}" 00:06:06.321 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:06:06.580 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:06:06.580 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:06:06.580 + echo 'End stage prepare_nvme.sh' 00:06:06.580 End stage prepare_nvme.sh 00:06:06.591 [Pipeline] sh 00:06:06.871 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:06:06.871 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:06:06.871 00:06:06.871 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:06:06.871 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:06:06.871 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:06:06.871 HELP=0 00:06:06.871 DRY_RUN=0 00:06:06.871 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:06:06.871 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:06:06.871 NVME_AUTO_CREATE=0 00:06:06.871 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:06:06.871 NVME_CMB=,,,, 00:06:06.871 NVME_PMR=,,,, 00:06:06.871 NVME_ZNS=,,,, 00:06:06.871 NVME_MS=true,,,, 00:06:06.871 NVME_FDP=,,,on, 00:06:06.871 SPDK_VAGRANT_DISTRO=fedora39 00:06:06.871 SPDK_VAGRANT_VMCPU=10 00:06:06.871 SPDK_VAGRANT_VMRAM=12288 00:06:06.871 SPDK_VAGRANT_PROVIDER=libvirt 00:06:06.871 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:06:06.871 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:06:06.871 SPDK_OPENSTACK_NETWORK=0 00:06:06.871 VAGRANT_PACKAGE_BOX=0 00:06:06.871 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:06:06.871 FORCE_DISTRO=true 00:06:06.871 VAGRANT_BOX_VERSION= 00:06:06.871 EXTRA_VAGRANTFILES= 00:06:06.871 NIC_MODEL=e1000 00:06:06.871 00:06:06.871 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:06:06.871 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:06:11.086 Bringing machine 'default' up with 'libvirt' provider... 00:06:11.086 ==> default: Creating image (snapshot of base box volume). 00:06:11.344 ==> default: Creating domain with the following settings... 
00:06:11.344 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732109162_57c2ceabf35078ab1015 00:06:11.344 ==> default: -- Domain type: kvm 00:06:11.344 ==> default: -- Cpus: 10 00:06:11.344 ==> default: -- Feature: acpi 00:06:11.344 ==> default: -- Feature: apic 00:06:11.344 ==> default: -- Feature: pae 00:06:11.344 ==> default: -- Memory: 12288M 00:06:11.344 ==> default: -- Memory Backing: hugepages: 00:06:11.344 ==> default: -- Management MAC: 00:06:11.344 ==> default: -- Loader: 00:06:11.344 ==> default: -- Nvram: 00:06:11.344 ==> default: -- Base box: spdk/fedora39 00:06:11.344 ==> default: -- Storage pool: default 00:06:11.344 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732109162_57c2ceabf35078ab1015.img (20G) 00:06:11.344 ==> default: -- Volume Cache: default 00:06:11.344 ==> default: -- Kernel: 00:06:11.344 ==> default: -- Initrd: 00:06:11.344 ==> default: -- Graphics Type: vnc 00:06:11.344 ==> default: -- Graphics Port: -1 00:06:11.344 ==> default: -- Graphics IP: 127.0.0.1 00:06:11.344 ==> default: -- Graphics Password: Not defined 00:06:11.344 ==> default: -- Video Type: cirrus 00:06:11.344 ==> default: -- Video VRAM: 9216 00:06:11.344 ==> default: -- Sound Type: 00:06:11.344 ==> default: -- Keymap: en-us 00:06:11.344 ==> default: -- TPM Path: 00:06:11.344 ==> default: -- INPUT: type=mouse, bus=ps2 00:06:11.344 ==> default: -- Command line args: 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:06:11.344 ==> default: -> value=-drive, 00:06:11.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:06:11.344 ==> default: -> value=-drive, 00:06:11.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:06:11.344 ==> default: -> value=-drive, 00:06:11.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:11.344 ==> default: -> value=-drive, 00:06:11.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:11.344 ==> default: -> value=-drive, 00:06:11.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:06:11.344 ==> default: -> value=-drive, 00:06:11.344 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:06:11.344 ==> default: -> value=-device, 00:06:11.344 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:11.344 ==> default: Creating shared folders metadata... 00:06:11.344 ==> default: Starting domain. 00:06:13.279 ==> default: Waiting for domain to get an IP address... 00:06:31.386 ==> default: Waiting for SSH to become available... 00:06:32.430 ==> default: Configuring and enabling network interfaces... 00:06:36.614 default: SSH address: 192.168.121.181:22 00:06:36.614 default: SSH username: vagrant 00:06:36.614 default: SSH auth method: private key 00:06:38.516 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:46.622 ==> default: Mounting SSHFS shared folder... 00:06:48.032 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:48.032 ==> default: Checking Mount.. 00:06:48.967 ==> default: Folder Successfully Mounted! 00:06:48.967 ==> default: Running provisioner: file... 00:06:49.533 default: ~/.gitconfig => .gitconfig 00:06:50.100 00:06:50.100 SUCCESS! 00:06:50.100 00:06:50.100 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:06:50.100 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:50.100 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:06:50.100 00:06:50.109 [Pipeline] } 00:06:50.125 [Pipeline] // stage 00:06:50.135 [Pipeline] dir 00:06:50.135 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:06:50.137 [Pipeline] { 00:06:50.150 [Pipeline] catchError 00:06:50.152 [Pipeline] { 00:06:50.165 [Pipeline] sh 00:06:50.444 + vagrant+ ssh-config --host vagrant 00:06:50.444 sed -ne /^Host/,$p 00:06:50.444 + tee ssh_conf 00:06:54.630 Host vagrant 00:06:54.630 HostName 192.168.121.181 00:06:54.630 User vagrant 00:06:54.630 Port 22 00:06:54.630 UserKnownHostsFile /dev/null 00:06:54.630 StrictHostKeyChecking no 00:06:54.630 PasswordAuthentication no 00:06:54.630 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:54.630 IdentitiesOnly yes 00:06:54.630 LogLevel FATAL 00:06:54.630 ForwardAgent yes 00:06:54.630 ForwardX11 yes 00:06:54.630 00:06:54.644 [Pipeline] withEnv 00:06:54.646 [Pipeline] { 00:06:54.661 [Pipeline] sh 00:06:54.939 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:54.939 source /etc/os-release 00:06:54.939 [[ -e /image.version ]] && img=$(< /image.version) 00:06:54.939 # Minimal, systemd-like check. 
00:06:54.939 if [[ -e /.dockerenv ]]; then 00:06:54.939 # Clear garbage from the node's name: 00:06:54.939 # agt-er_autotest_547-896 -> autotest_547-896 00:06:54.939 # $HOSTNAME is the actual container id 00:06:54.939 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:54.939 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:54.939 # We can assume this is a mount from a host where container is running, 00:06:54.939 # so fetch its hostname to easily identify the target swarm worker. 00:06:54.939 container="$(< /etc/hostname) ($agent)" 00:06:54.939 else 00:06:54.939 # Fallback 00:06:54.939 container=$agent 00:06:54.939 fi 00:06:54.939 fi 00:06:54.939 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:54.939 00:06:55.208 [Pipeline] } 00:06:55.225 [Pipeline] // withEnv 00:06:55.234 [Pipeline] setCustomBuildProperty 00:06:55.249 [Pipeline] stage 00:06:55.251 [Pipeline] { (Tests) 00:06:55.268 [Pipeline] sh 00:06:55.546 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:55.559 [Pipeline] sh 00:06:55.836 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:56.117 [Pipeline] timeout 00:06:56.117 Timeout set to expire in 50 min 00:06:56.119 [Pipeline] { 00:06:56.130 [Pipeline] sh 00:06:56.456 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:57.022 HEAD is now at b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:06:57.034 [Pipeline] sh 00:06:57.319 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:57.592 [Pipeline] sh 00:06:57.869 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:58.146 [Pipeline] sh 00:06:58.425 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:06:58.425 ++ readlink -f spdk_repo 00:06:58.425 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:58.425 + [[ -n /home/vagrant/spdk_repo ]] 00:06:58.425 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:58.425 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:58.425 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:58.425 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:58.425 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:58.425 + [[ nvme-vg-autotest == pkgdep-* ]] 00:06:58.425 + cd /home/vagrant/spdk_repo 00:06:58.425 + source /etc/os-release 00:06:58.425 ++ NAME='Fedora Linux' 00:06:58.425 ++ VERSION='39 (Cloud Edition)' 00:06:58.425 ++ ID=fedora 00:06:58.425 ++ VERSION_ID=39 00:06:58.425 ++ VERSION_CODENAME= 00:06:58.425 ++ PLATFORM_ID=platform:f39 00:06:58.425 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:58.425 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:58.425 ++ LOGO=fedora-logo-icon 00:06:58.425 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:58.425 ++ HOME_URL=https://fedoraproject.org/ 00:06:58.425 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:58.425 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:58.425 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:58.684 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:58.684 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:58.684 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:58.684 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:58.684 ++ SUPPORT_END=2024-11-12 00:06:58.684 ++ VARIANT='Cloud Edition' 00:06:58.684 ++ VARIANT_ID=cloud 00:06:58.684 + uname -a 00:06:58.684 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:58.684 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:58.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.201 Hugepages 00:06:59.201 node hugesize free / total 00:06:59.201 node0 1048576kB 0 / 0 00:06:59.201 node0 2048kB 0 / 0 00:06:59.201 00:06:59.201 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:59.201 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:59.201 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:06:59.201 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:59.201 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:06:59.201 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:59.201 + rm -f /tmp/spdk-ld-path 00:06:59.201 + source autorun-spdk.conf 00:06:59.201 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:59.201 ++ SPDK_TEST_NVME=1 00:06:59.201 ++ SPDK_TEST_FTL=1 00:06:59.201 ++ SPDK_TEST_ISAL=1 00:06:59.201 ++ SPDK_RUN_ASAN=1 00:06:59.201 ++ SPDK_RUN_UBSAN=1 00:06:59.201 ++ SPDK_TEST_XNVME=1 00:06:59.201 ++ SPDK_TEST_NVME_FDP=1 00:06:59.201 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:59.201 ++ RUN_NIGHTLY=0 00:06:59.201 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:59.201 + [[ -n '' ]] 00:06:59.201 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:59.460 + for M in /var/spdk/build-*-manifest.txt 00:06:59.460 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:59.460 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:59.460 + for M in /var/spdk/build-*-manifest.txt 00:06:59.460 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:59.460 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:59.460 + for M in /var/spdk/build-*-manifest.txt 00:06:59.460 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:59.460 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:59.460 ++ uname 00:06:59.460 + [[ Linux == \L\i\n\u\x ]] 00:06:59.460 + sudo dmesg -T 00:06:59.460 + sudo dmesg --clear 00:06:59.460 + dmesg_pid=5294 00:06:59.460 
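The `setup.sh status` table above shows four QEMU-emulated NVMe controllers (vendor:device 1b36 0010), one per `-device nvme` entry in the libvirt domain definition earlier; the controller at 0000:00:12.0 carries the three namespaces backed by the ex5-nvme-multi*.img files. A minimal sysfs walk along these lines (a sketch assuming the standard Linux NVMe sysfs layout; it is not part of the CI scripts) maps each kernel controller back to its QEMU-assigned serial (12340-12343):

    #!/usr/bin/env bash
    # Map kernel nvme controllers to their PCI address, serial and namespaces.
    for ctrl in /sys/class/nvme/nvme*; do
        name=$(basename "$ctrl")
        addr=$(cat "$ctrl/address" 2>/dev/null)               # PCI BDF, e.g. 0000:00:10.0
        serial=$(cat "$ctrl/serial" 2>/dev/null | tr -d ' ')  # QEMU serial, e.g. 12340
        # Namespaces appear as nvmeXnY directories under the controller.
        ns=$(ls -d "$ctrl/${name}n"* 2>/dev/null | xargs -rn1 basename | tr '\n' ' ')
        echo "$name $addr serial=$serial namespaces: $ns"
    done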
+ [[ Fedora Linux == FreeBSD ]] 00:06:59.460 + sudo dmesg -Tw 00:06:59.460 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:59.460 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:59.460 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:59.460 + [[ -x /usr/src/fio-static/fio ]] 00:06:59.460 + export FIO_BIN=/usr/src/fio-static/fio 00:06:59.460 + FIO_BIN=/usr/src/fio-static/fio 00:06:59.460 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:59.460 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:59.460 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:59.460 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:59.460 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:59.460 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:59.460 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:59.460 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:59.460 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:59.460 13:26:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:59.460 13:26:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:59.460 13:26:51 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:06:59.460 13:26:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:59.460 13:26:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:59.460 13:26:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:59.460 13:26:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.460 13:26:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:59.460 13:26:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:59.460 13:26:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.460 13:26:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.460 13:26:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.460 13:26:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.460 13:26:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.460 13:26:51 -- paths/export.sh@5 -- $ export PATH 00:06:59.460 13:26:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.460 13:26:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:59.460 13:26:51 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:59.460 13:26:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109211.XXXXXX 00:06:59.460 13:26:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109211.oGwVsw 00:06:59.460 13:26:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:59.460 13:26:51 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:59.460 13:26:51 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:59.460 13:26:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:59.460 13:26:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:59.460 13:26:51 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:59.460 13:26:51 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:59.460 13:26:51 -- common/autotest_common.sh@10 -- $ set +x 00:06:59.460 13:26:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:06:59.460 13:26:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:59.460 13:26:51 -- pm/common@17 -- $ local monitor 00:06:59.460 13:26:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:59.461 13:26:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:59.461 13:26:51 -- pm/common@25 -- $ sleep 1 00:06:59.461 13:26:51 -- pm/common@21 -- $ date +%s 00:06:59.461 13:26:51 -- pm/common@21 -- $ date +%s 00:06:59.719 13:26:51 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109211 00:06:59.719 13:26:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109211 00:06:59.719 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109211_collect-vmstat.pm.log 00:06:59.719 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109211_collect-cpu-load.pm.log 00:07:00.654 13:26:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:00.654 13:26:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:00.654 13:26:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:00.654 13:26:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:07:00.654 13:26:52 -- spdk/autobuild.sh@16 -- $ date -u 00:07:00.654 Wed Nov 20 01:26:52 PM UTC 2024 00:07:00.654 13:26:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:00.654 v25.01-pre-221-gb6a8866f3 00:07:00.654 13:26:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:07:00.654 13:26:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:07:00.654 13:26:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:00.654 13:26:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:00.654 13:26:52 -- common/autotest_common.sh@10 -- $ set +x 00:07:00.654 ************************************ 00:07:00.654 START TEST asan 00:07:00.654 ************************************ 00:07:00.654 using asan 00:07:00.654 13:26:52 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:07:00.654 00:07:00.654 real 0m0.000s 00:07:00.654 user 0m0.000s 00:07:00.654 sys 0m0.000s 00:07:00.654 13:26:52 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:00.654 ************************************ 00:07:00.654 13:26:52 asan -- common/autotest_common.sh@10 -- $ set +x 00:07:00.654 END TEST asan 00:07:00.654 ************************************ 00:07:00.654 13:26:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:00.654 13:26:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:00.654 13:26:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:00.654 13:26:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:00.654 13:26:52 -- common/autotest_common.sh@10 -- $ set +x 00:07:00.654 ************************************ 00:07:00.654 START TEST ubsan 00:07:00.654 ************************************ 00:07:00.654 using ubsan 00:07:00.654 13:26:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:07:00.654 00:07:00.654 real 0m0.000s 00:07:00.654 user 0m0.000s 00:07:00.654 sys 0m0.000s 00:07:00.654 13:26:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:00.654 ************************************ 00:07:00.654 END TEST ubsan 00:07:00.654 13:26:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:00.654 ************************************ 00:07:00.654 13:26:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:00.654 13:26:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:00.654 13:26:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:00.654 13:26:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:00.654 13:26:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:00.654 13:26:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:00.654 13:26:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
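The asan and ubsan blocks above are produced by SPDK's run_test helper, which brackets a command between START TEST/END TEST banners and prints a time summary; the real helper lives in test/common/autotest_common.sh (the stack frames logged above reference it). A behavior-only approximation of that wrapper, mirroring just what shows up in the log:

    # Sketch of the observable run_test behavior, not SPDK's implementation.
    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"            # emits the real/user/sys lines seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }

    run_test asan echo 'using asan'   # the invocation logged by autobuild.sh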
00:07:00.654 13:26:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:00.654 13:26:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:07:00.654 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:00.654 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:01.220 Using 'verbs' RDMA provider 00:07:14.349 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:26.542 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:26.542 Creating mk/config.mk...done. 00:07:26.542 Creating mk/cc.flags.mk...done. 00:07:26.542 Type 'make' to build. 00:07:26.542 13:27:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:07:26.542 13:27:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:26.542 13:27:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:26.542 13:27:18 -- common/autotest_common.sh@10 -- $ set +x 00:07:26.542 ************************************ 00:07:26.542 START TEST make 00:07:26.542 ************************************ 00:07:26.542 13:27:18 make -- common/autotest_common.sh@1129 -- $ make -j10 00:07:26.800 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:07:26.800 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:07:26.800 meson setup builddir \ 00:07:26.800 -Dwith-libaio=enabled \ 00:07:26.800 -Dwith-liburing=enabled \ 00:07:26.800 -Dwith-libvfn=disabled \ 00:07:26.800 -Dwith-spdk=disabled \ 00:07:26.800 -Dexamples=false \ 00:07:26.800 -Dtests=false \ 00:07:26.800 -Dtools=false && \ 00:07:26.800 meson compile -C builddir && \ 00:07:26.800 cd -) 00:07:26.800 make[1]: Nothing to be done for 'all'. 
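The subshell above builds the bundled xnvme with a fixed set of meson feature options. Once `meson setup` has populated builddir, those options can be inspected or changed with the stock meson CLI; for example (a usage sketch, run from spdk/xnvme):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)'  # current feature values
    meson configure builddir -Dexamples=true   # flip a single option in place
    meson compile -C builddir                  # incremental rebuild with the new setting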
00:07:30.982 The Meson build system 00:07:30.982 Version: 1.5.0 00:07:30.982 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:07:30.982 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:07:30.982 Build type: native build 00:07:30.982 Project name: xnvme 00:07:30.982 Project version: 0.7.5 00:07:30.982 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:30.982 C linker for the host machine: cc ld.bfd 2.40-14 00:07:30.982 Host machine cpu family: x86_64 00:07:30.982 Host machine cpu: x86_64 00:07:30.982 Message: host_machine.system: linux 00:07:30.982 Compiler for C supports arguments -Wno-missing-braces: YES 00:07:30.982 Compiler for C supports arguments -Wno-cast-function-type: YES 00:07:30.982 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:07:30.982 Run-time dependency threads found: YES 00:07:30.982 Has header "setupapi.h" : NO 00:07:30.982 Has header "linux/blkzoned.h" : YES 00:07:30.982 Has header "linux/blkzoned.h" : YES (cached) 00:07:30.982 Has header "libaio.h" : YES 00:07:30.982 Library aio found: YES 00:07:30.982 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:30.982 Run-time dependency liburing found: YES 2.2 00:07:30.982 Dependency libvfn skipped: feature with-libvfn disabled 00:07:30.982 Found CMake: /usr/bin/cmake (3.27.7) 00:07:30.982 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:07:30.982 Subproject spdk : skipped: feature with-spdk disabled 00:07:30.982 Run-time dependency appleframeworks found: NO (tried framework) 00:07:30.982 Run-time dependency appleframeworks found: NO (tried framework) 00:07:30.982 Library rt found: YES 00:07:30.982 Checking for function "clock_gettime" with dependency -lrt: YES 00:07:30.982 Configuring xnvme_config.h using configuration 00:07:30.982 Configuring xnvme.spec using configuration 00:07:30.982 Run-time dependency bash-completion found: YES 2.11 00:07:30.982 Message: Bash-completions: /usr/share/bash-completion/completions 00:07:30.982 Program cp found: YES (/usr/bin/cp) 00:07:30.982 Build targets in project: 3 00:07:30.982 00:07:30.982 xnvme 0.7.5 00:07:30.982 00:07:30.982 Subprojects 00:07:30.982 spdk : NO Feature 'with-spdk' disabled 00:07:30.982 00:07:30.982 User defined options 00:07:30.982 examples : false 00:07:30.982 tests : false 00:07:30.982 tools : false 00:07:30.982 with-libaio : enabled 00:07:30.982 with-liburing: enabled 00:07:30.982 with-libvfn : disabled 00:07:30.982 with-spdk : disabled 00:07:30.982 00:07:30.982 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:31.240 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:07:31.240 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:07:31.499 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:07:31.499 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:07:31.499 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:07:31.499 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:07:31.499 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:07:31.499 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:07:31.499 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:07:31.499 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:07:31.499 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:07:31.758 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:07:31.758 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:07:31.758 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:07:31.758 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:07:31.758 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:07:31.758 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:07:31.758 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:07:31.758 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:07:31.758 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:07:31.758 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:07:32.016 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:07:32.016 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:07:32.016 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:07:32.016 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:07:32.016 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:07:32.016 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:07:32.016 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:07:32.016 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:07:32.016 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:07:32.016 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:07:32.016 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:07:32.016 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:07:32.016 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:07:32.016 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:07:32.016 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:07:32.016 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:07:32.016 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:07:32.274 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:07:32.274 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:07:32.274 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:07:32.274 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:07:32.274 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:07:32.274 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:07:32.274 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:07:32.274 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:07:32.274 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:07:32.274 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:07:32.274 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:07:32.274 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:07:32.274 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:07:32.274 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:07:32.274 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:07:32.274 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:07:32.274 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:07:32.534 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:07:32.534 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:07:32.534 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:07:32.534 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:07:32.534 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:07:32.534 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:07:32.534 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:07:32.534 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:07:32.534 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:07:32.534 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:07:32.534 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:07:32.534 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:07:32.792 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:07:32.792 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:07:32.792 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:07:32.792 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:07:32.792 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:07:33.050 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:07:33.050 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:07:33.985 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:07:33.985 [75/76] Linking static target lib/libxnvme.a 00:07:33.985 [76/76] Linking target lib/libxnvme.so.0.7.5 00:07:33.985 INFO: autodetecting backend as ninja 00:07:33.985 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:07:34.243 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:07:52.319 The Meson build system 00:07:52.319 Version: 1.5.0 00:07:52.319 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:52.319 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:52.319 Build type: native build 00:07:52.319 Program cat found: YES (/usr/bin/cat) 00:07:52.319 Project name: DPDK 00:07:52.319 Project version: 24.03.0 00:07:52.319 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:52.319 C linker for the host machine: cc ld.bfd 2.40-14 00:07:52.319 Host machine cpu family: x86_64 00:07:52.319 Host machine cpu: x86_64 00:07:52.319 Message: ## Building in Developer Mode ## 00:07:52.319 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:52.319 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:52.319 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:52.319 Program python3 found: YES (/usr/bin/python3) 00:07:52.319 Program cat found: YES (/usr/bin/cat) 00:07:52.319 Compiler for C supports arguments -march=native: YES 00:07:52.319 Checking for size of "void *" : 8 00:07:52.319 Checking for size of "void *" : 8 (cached) 00:07:52.319 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:07:52.319 Library m found: YES 00:07:52.319 Library numa found: YES 00:07:52.319 Has header "numaif.h" : YES 00:07:52.319 Library fdt found: NO 00:07:52.319 Library execinfo found: NO 00:07:52.319 Has header "execinfo.h" : YES 00:07:52.319 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:52.319 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:52.319 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:52.319 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:52.319 Run-time dependency openssl found: YES 3.1.1 00:07:52.319 Run-time dependency libpcap found: YES 1.10.4 00:07:52.319 Has header "pcap.h" with dependency libpcap: YES 00:07:52.319 Compiler for C supports arguments -Wcast-qual: YES 00:07:52.320 Compiler for C supports arguments -Wdeprecated: YES 00:07:52.320 Compiler for C supports arguments -Wformat: YES 00:07:52.320 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:52.320 Compiler for C supports arguments -Wformat-security: NO 00:07:52.320 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:52.320 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:52.320 Compiler for C supports arguments -Wnested-externs: YES 00:07:52.320 Compiler for C supports arguments -Wold-style-definition: YES 00:07:52.320 Compiler for C supports arguments -Wpointer-arith: YES 00:07:52.320 Compiler for C supports arguments -Wsign-compare: YES 00:07:52.320 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:52.320 Compiler for C supports arguments -Wundef: YES 00:07:52.320 Compiler for C supports arguments -Wwrite-strings: YES 00:07:52.320 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:52.320 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:52.320 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:52.320 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:52.320 Program objdump found: YES (/usr/bin/objdump) 00:07:52.320 Compiler for C supports arguments -mavx512f: YES 00:07:52.320 Checking if "AVX512 checking" compiles: YES 00:07:52.320 Fetching value of define "__SSE4_2__" : 1 00:07:52.320 Fetching value of define "__AES__" : 1 00:07:52.320 Fetching value of define "__AVX__" : 1 00:07:52.320 Fetching value of define "__AVX2__" : 1 00:07:52.320 Fetching value of define "__AVX512BW__" : (undefined) 00:07:52.320 Fetching value of define "__AVX512CD__" : (undefined) 00:07:52.320 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:52.320 Fetching value of define "__AVX512F__" : (undefined) 00:07:52.320 Fetching value of define "__AVX512VL__" : (undefined) 00:07:52.320 Fetching value of define "__PCLMUL__" : 1 00:07:52.320 Fetching value of define "__RDRND__" : 1 00:07:52.320 Fetching value of define "__RDSEED__" : 1 00:07:52.320 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:52.320 Fetching value of define "__znver1__" : (undefined) 00:07:52.320 Fetching value of define "__znver2__" : (undefined) 00:07:52.320 Fetching value of define "__znver3__" : (undefined) 00:07:52.320 Fetching value of define "__znver4__" : (undefined) 00:07:52.320 Library asan found: YES 00:07:52.320 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:52.320 Message: lib/log: Defining dependency "log" 00:07:52.320 Message: lib/kvargs: Defining dependency "kvargs" 00:07:52.320 Message: lib/telemetry: Defining dependency "telemetry" 00:07:52.320 Library rt found: YES 00:07:52.320 
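Each "Compiler for C supports arguments ..." and "Fetching value of define ..." line above is a meson compiler probe: try compiling a trivial program with the flag (meson's cc.has_argument), or scan the preprocessor's predefined macros (cc.get_define). The same checks done by hand look roughly like this (a standalone approximation, not DPDK's actual build code):

    # Does the compiler accept -mavx512f?
    if echo 'int main(void){return 0;}' | cc -mavx512f -x c - -o /dev/null 2>/dev/null; then
        echo 'Compiler for C supports arguments -mavx512f: YES'
    fi
    # Is __AVX512F__ predefined for this host's -march=native?
    cc -march=native -dM -E - </dev/null | grep -q '__AVX512F__' &&
        echo '__AVX512F__ : 1' ||
        echo '__AVX512F__ : (undefined)'   # what the log shows on a host without AVX-512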
Checking for function "getentropy" : NO
00:07:52.320 Message: lib/eal: Defining dependency "eal"
00:07:52.320 Message: lib/ring: Defining dependency "ring"
00:07:52.320 Message: lib/rcu: Defining dependency "rcu"
00:07:52.320 Message: lib/mempool: Defining dependency "mempool"
00:07:52.320 Message: lib/mbuf: Defining dependency "mbuf"
00:07:52.320 Fetching value of define "__PCLMUL__" : 1 (cached)
00:07:52.320 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:07:52.320 Compiler for C supports arguments -mpclmul: YES
00:07:52.320 Compiler for C supports arguments -maes: YES
00:07:52.320 Compiler for C supports arguments -mavx512f: YES (cached)
00:07:52.320 Compiler for C supports arguments -mavx512bw: YES
00:07:52.320 Compiler for C supports arguments -mavx512dq: YES
00:07:52.320 Compiler for C supports arguments -mavx512vl: YES
00:07:52.320 Compiler for C supports arguments -mvpclmulqdq: YES
00:07:52.320 Compiler for C supports arguments -mavx2: YES
00:07:52.320 Compiler for C supports arguments -mavx: YES
00:07:52.320 Message: lib/net: Defining dependency "net"
00:07:52.320 Message: lib/meter: Defining dependency "meter"
00:07:52.320 Message: lib/ethdev: Defining dependency "ethdev"
00:07:52.320 Message: lib/pci: Defining dependency "pci"
00:07:52.320 Message: lib/cmdline: Defining dependency "cmdline"
00:07:52.320 Message: lib/hash: Defining dependency "hash"
00:07:52.320 Message: lib/timer: Defining dependency "timer"
00:07:52.320 Message: lib/compressdev: Defining dependency "compressdev"
00:07:52.320 Message: lib/cryptodev: Defining dependency "cryptodev"
00:07:52.320 Message: lib/dmadev: Defining dependency "dmadev"
00:07:52.320 Compiler for C supports arguments -Wno-cast-qual: YES
00:07:52.320 Message: lib/power: Defining dependency "power"
00:07:52.320 Message: lib/reorder: Defining dependency "reorder"
00:07:52.320 Message: lib/security: Defining dependency "security"
00:07:52.320 Has header "linux/userfaultfd.h" : YES
00:07:52.320 Has header "linux/vduse.h" : YES
00:07:52.320 Message: lib/vhost: Defining dependency "vhost"
00:07:52.320 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:07:52.320 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:07:52.320 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:07:52.320 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:07:52.320 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:07:52.320 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:07:52.320 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:07:52.320 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:07:52.320 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:07:52.320 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:07:52.320 Program doxygen found: YES (/usr/local/bin/doxygen)
00:07:52.320 Configuring doxy-api-html.conf using configuration
00:07:52.320 Configuring doxy-api-man.conf using configuration
00:07:52.320 Program mandb found: YES (/usr/bin/mandb)
00:07:52.320 Program sphinx-build found: NO
00:07:52.320 Configuring rte_build_config.h using configuration
00:07:52.320 Message:
00:07:52.320 =================
00:07:52.320 Applications Enabled
00:07:52.320 =================
00:07:52.320
00:07:52.320 apps:
00:07:52.320
00:07:52.320
00:07:52.320 Message:
00:07:52.320 =================
00:07:52.320 Libraries Enabled
00:07:52.320 =================
00:07:52.320
00:07:52.320 libs:
00:07:52.320 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:07:52.320 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:07:52.320 cryptodev, dmadev, power, reorder, security, vhost,
00:07:52.320
00:07:52.320 Message:
00:07:52.320 ===============
00:07:52.320 Drivers Enabled
00:07:52.320 ===============
00:07:52.320
00:07:52.320 common:
00:07:52.320
00:07:52.320 bus:
00:07:52.320 pci, vdev,
00:07:52.320 mempool:
00:07:52.320 ring,
00:07:52.320 dma:
00:07:52.320
00:07:52.320 net:
00:07:52.320
00:07:52.320 crypto:
00:07:52.320
00:07:52.320 compress:
00:07:52.320
00:07:52.320 vdpa:
00:07:52.320
00:07:52.320
00:07:52.320 Message:
00:07:52.320 =================
00:07:52.320 Content Skipped
00:07:52.320 =================
00:07:52.320
00:07:52.320 apps:
00:07:52.320 dumpcap: explicitly disabled via build config
00:07:52.320 graph: explicitly disabled via build config
00:07:52.320 pdump: explicitly disabled via build config
00:07:52.320 proc-info: explicitly disabled via build config
00:07:52.320 test-acl: explicitly disabled via build config
00:07:52.320 test-bbdev: explicitly disabled via build config
00:07:52.320 test-cmdline: explicitly disabled via build config
00:07:52.320 test-compress-perf: explicitly disabled via build config
00:07:52.320 test-crypto-perf: explicitly disabled via build config
00:07:52.320 test-dma-perf: explicitly disabled via build config
00:07:52.320 test-eventdev: explicitly disabled via build config
00:07:52.320 test-fib: explicitly disabled via build config
00:07:52.320 test-flow-perf: explicitly disabled via build config
00:07:52.320 test-gpudev: explicitly disabled via build config
00:07:52.320 test-mldev: explicitly disabled via build config
00:07:52.320 test-pipeline: explicitly disabled via build config
00:07:52.320 test-pmd: explicitly disabled via build config
00:07:52.320 test-regex: explicitly disabled via build config
00:07:52.320 test-sad: explicitly disabled via build config
00:07:52.320 test-security-perf: explicitly disabled via build config
00:07:52.320
00:07:52.320 libs:
00:07:52.320 argparse: explicitly disabled via build config
00:07:52.320 metrics: explicitly disabled via build config
00:07:52.320 acl: explicitly disabled via build config
00:07:52.320 bbdev: explicitly disabled via build config
00:07:52.320 bitratestats: explicitly disabled via build config
00:07:52.320 bpf: explicitly disabled via build config
00:07:52.320 cfgfile: explicitly disabled via build config
00:07:52.320 distributor: explicitly disabled via build config
00:07:52.320 efd: explicitly disabled via build config
00:07:52.320 eventdev: explicitly disabled via build config
00:07:52.320 dispatcher: explicitly disabled via build config
00:07:52.320 gpudev: explicitly disabled via build config
00:07:52.320 gro: explicitly disabled via build config
00:07:52.320 gso: explicitly disabled via build config
00:07:52.320 ip_frag: explicitly disabled via build config
00:07:52.320 jobstats: explicitly disabled via build config
00:07:52.320 latencystats: explicitly disabled via build config
00:07:52.320 lpm: explicitly disabled via build config
00:07:52.320 member: explicitly disabled via build config
00:07:52.320 pcapng: explicitly disabled via build config
00:07:52.320 rawdev: explicitly disabled via build config
00:07:52.320 regexdev: explicitly disabled via build config
00:07:52.320 mldev: explicitly disabled via build config
00:07:52.320 rib: explicitly disabled via build config
00:07:52.320 sched: explicitly disabled via build config
00:07:52.320 stack: explicitly disabled via build config
00:07:52.320 ipsec: explicitly disabled via build config
00:07:52.320 pdcp: explicitly disabled via build config
00:07:52.320 fib: explicitly disabled via build config
00:07:52.320 port: explicitly disabled via build config
00:07:52.320 pdump: explicitly disabled via build config
00:07:52.320 table: explicitly disabled via build config
00:07:52.320 pipeline: explicitly disabled via build config
00:07:52.320 graph: explicitly disabled via build config
00:07:52.320 node: explicitly disabled via build config
00:07:52.320
00:07:52.320 drivers:
00:07:52.321 common/cpt: not in enabled drivers build config
00:07:52.321 common/dpaax: not in enabled drivers build config
00:07:52.321 common/iavf: not in enabled drivers build config
00:07:52.321 common/idpf: not in enabled drivers build config
00:07:52.321 common/ionic: not in enabled drivers build config
00:07:52.321 common/mvep: not in enabled drivers build config
00:07:52.321 common/octeontx: not in enabled drivers build config
00:07:52.321 bus/auxiliary: not in enabled drivers build config
00:07:52.321 bus/cdx: not in enabled drivers build config
00:07:52.321 bus/dpaa: not in enabled drivers build config
00:07:52.321 bus/fslmc: not in enabled drivers build config
00:07:52.321 bus/ifpga: not in enabled drivers build config
00:07:52.321 bus/platform: not in enabled drivers build config
00:07:52.321 bus/uacce: not in enabled drivers build config
00:07:52.321 bus/vmbus: not in enabled drivers build config
00:07:52.321 common/cnxk: not in enabled drivers build config
00:07:52.321 common/mlx5: not in enabled drivers build config
00:07:52.321 common/nfp: not in enabled drivers build config
00:07:52.321 common/nitrox: not in enabled drivers build config
00:07:52.321 common/qat: not in enabled drivers build config
00:07:52.321 common/sfc_efx: not in enabled drivers build config
00:07:52.321 mempool/bucket: not in enabled drivers build config
00:07:52.321 mempool/cnxk: not in enabled drivers build config
00:07:52.321 mempool/dpaa: not in enabled drivers build config
00:07:52.321 mempool/dpaa2: not in enabled drivers build config
00:07:52.321 mempool/octeontx: not in enabled drivers build config
00:07:52.321 mempool/stack: not in enabled drivers build config
00:07:52.321 dma/cnxk: not in enabled drivers build config
00:07:52.321 dma/dpaa: not in enabled drivers build config
00:07:52.321 dma/dpaa2: not in enabled drivers build config
00:07:52.321 dma/hisilicon: not in enabled drivers build config
00:07:52.321 dma/idxd: not in enabled drivers build config
00:07:52.321 dma/ioat: not in enabled drivers build config
00:07:52.321 dma/skeleton: not in enabled drivers build config
00:07:52.321 net/af_packet: not in enabled drivers build config
00:07:52.321 net/af_xdp: not in enabled drivers build config
00:07:52.321 net/ark: not in enabled drivers build config
00:07:52.321 net/atlantic: not in enabled drivers build config
00:07:52.321 net/avp: not in enabled drivers build config
00:07:52.321 net/axgbe: not in enabled drivers build config
00:07:52.321 net/bnx2x: not in enabled drivers build config
00:07:52.321 net/bnxt: not in enabled drivers build config
00:07:52.321 net/bonding: not in enabled drivers build config
00:07:52.321 net/cnxk: not in enabled drivers build config
00:07:52.321 net/cpfl: not in enabled drivers build config
00:07:52.321 net/cxgbe: not in enabled drivers build config
00:07:52.321 net/dpaa: not in enabled drivers build config
00:07:52.321 net/dpaa2: not in enabled drivers build config
00:07:52.321 net/e1000: not in enabled drivers build config
00:07:52.321 net/ena: not in enabled drivers build config
00:07:52.321 net/enetc: not in enabled drivers build config
00:07:52.321 net/enetfec: not in enabled drivers build config
00:07:52.321 net/enic: not in enabled drivers build config
00:07:52.321 net/failsafe: not in enabled drivers build config
00:07:52.321 net/fm10k: not in enabled drivers build config
00:07:52.321 net/gve: not in enabled drivers build config
00:07:52.321 net/hinic: not in enabled drivers build config
00:07:52.321 net/hns3: not in enabled drivers build config
00:07:52.321 net/i40e: not in enabled drivers build config
00:07:52.321 net/iavf: not in enabled drivers build config
00:07:52.321 net/ice: not in enabled drivers build config
00:07:52.321 net/idpf: not in enabled drivers build config
00:07:52.321 net/igc: not in enabled drivers build config
00:07:52.321 net/ionic: not in enabled drivers build config
00:07:52.321 net/ipn3ke: not in enabled drivers build config
00:07:52.321 net/ixgbe: not in enabled drivers build config
00:07:52.321 net/mana: not in enabled drivers build config
00:07:52.321 net/memif: not in enabled drivers build config
00:07:52.321 net/mlx4: not in enabled drivers build config
00:07:52.321 net/mlx5: not in enabled drivers build config
00:07:52.321 net/mvneta: not in enabled drivers build config
00:07:52.321 net/mvpp2: not in enabled drivers build config
00:07:52.321 net/netvsc: not in enabled drivers build config
00:07:52.321 net/nfb: not in enabled drivers build config
00:07:52.321 net/nfp: not in enabled drivers build config
00:07:52.321 net/ngbe: not in enabled drivers build config
00:07:52.321 net/null: not in enabled drivers build config
00:07:52.321 net/octeontx: not in enabled drivers build config
00:07:52.321 net/octeon_ep: not in enabled drivers build config
00:07:52.321 net/pcap: not in enabled drivers build config
00:07:52.321 net/pfe: not in enabled drivers build config
00:07:52.321 net/qede: not in enabled drivers build config
00:07:52.321 net/ring: not in enabled drivers build config
00:07:52.321 net/sfc: not in enabled drivers build config
00:07:52.321 net/softnic: not in enabled drivers build config
00:07:52.321 net/tap: not in enabled drivers build config
00:07:52.321 net/thunderx: not in enabled drivers build config
00:07:52.321 net/txgbe: not in enabled drivers build config
00:07:52.321 net/vdev_netvsc: not in enabled drivers build config
00:07:52.321 net/vhost: not in enabled drivers build config
00:07:52.321 net/virtio: not in enabled drivers build config
00:07:52.321 net/vmxnet3: not in enabled drivers build config
00:07:52.321 raw/*: missing internal dependency, "rawdev"
00:07:52.321 crypto/armv8: not in enabled drivers build config
00:07:52.321 crypto/bcmfs: not in enabled drivers build config
00:07:52.321 crypto/caam_jr: not in enabled drivers build config
00:07:52.321 crypto/ccp: not in enabled drivers build config
00:07:52.321 crypto/cnxk: not in enabled drivers build config
00:07:52.321 crypto/dpaa_sec: not in enabled drivers build config
00:07:52.321 crypto/dpaa2_sec: not in enabled drivers build config
00:07:52.321 crypto/ipsec_mb: not in enabled drivers build config
00:07:52.321 crypto/mlx5: not in enabled drivers build config
00:07:52.321 crypto/mvsam: not in enabled drivers build config
00:07:52.321 crypto/nitrox: not in enabled drivers build config
00:07:52.321 crypto/null: not in enabled drivers build config
00:07:52.321 crypto/octeontx: not in enabled drivers build config
00:07:52.321 crypto/openssl: not in enabled drivers build config
00:07:52.321 crypto/scheduler: not in enabled drivers build config
00:07:52.321 crypto/uadk: not in enabled drivers build config
00:07:52.321 crypto/virtio: not in enabled drivers build config
00:07:52.321 compress/isal: not in enabled drivers build config
00:07:52.321 compress/mlx5: not in enabled drivers build config
00:07:52.321 compress/nitrox: not in enabled drivers build config
00:07:52.321 compress/octeontx: not in enabled drivers build config
00:07:52.321 compress/zlib: not in enabled drivers build config
00:07:52.321 regex/*: missing internal dependency, "regexdev"
00:07:52.321 ml/*: missing internal dependency, "mldev"
00:07:52.321 vdpa/ifc: not in enabled drivers build config
00:07:52.321 vdpa/mlx5: not in enabled drivers build config
00:07:52.321 vdpa/nfp: not in enabled drivers build config
00:07:52.321 vdpa/sfc: not in enabled drivers build config
00:07:52.321 event/*: missing internal dependency, "eventdev"
00:07:52.321 baseband/*: missing internal dependency, "bbdev"
00:07:52.321 gpu/*: missing internal dependency, "gpudev"
00:07:52.321
00:07:52.321
00:07:52.321 Build targets in project: 85
00:07:52.321
00:07:52.321 DPDK 24.03.0
00:07:52.321
00:07:52.321 User defined options
00:07:52.321 buildtype : debug
00:07:52.321 default_library : shared
00:07:52.321 libdir : lib
00:07:52.321 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:52.321 b_sanitize : address
00:07:52.321 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:07:52.321 c_link_args :
00:07:52.321 cpu_instruction_set: native
00:07:52.321 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:07:52.321 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:07:52.321 enable_docs : false
00:07:52.321 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:07:52.321 enable_kmods : false
00:07:52.321 max_lcores : 128
00:07:52.321 tests : false
00:07:52.321
00:07:52.321 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:52.321 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:07:52.321 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:07:52.321 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:07:52.321 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:07:52.321 [4/268] Linking static target lib/librte_kvargs.a
00:07:52.321 [5/268] Linking static target lib/librte_log.a
00:07:52.321 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:07:52.580 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.580 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:07:52.838 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:07:53.096 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:07:53.096 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
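For reference, the "User defined options" summary above maps onto a meson setup invocation along the following lines. This is a sketch reconstructed from the summary, not the literal command the CI scripts run; the option names are standard meson core options (buildtype, default_library, libdir, prefix, b_sanitize, c_args) plus DPDK project options (cpu_instruction_set, disable_apps, disable_libs, enable_docs, enable_drivers, enable_kmods, max_lcores, tests), with every value taken verbatim from the summary:

    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Dtests=false

The [n/268] ninja steps that continue below are driven by the command the log prints later (/usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10).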
00:07:53.096 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:53.096 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:53.354 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:53.354 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:53.354 [16/268] Linking target lib/librte_log.so.24.1 00:07:53.612 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:53.613 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:53.613 [19/268] Linking static target lib/librte_telemetry.a 00:07:53.613 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:53.871 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:54.149 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:54.411 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:54.669 [24/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.927 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:54.927 [26/268] Linking target lib/librte_telemetry.so.24.1 00:07:54.927 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:54.927 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:54.927 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:54.927 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:55.192 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:55.192 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:55.192 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:55.192 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:55.461 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:55.461 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:55.719 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:56.346 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:56.346 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:56.346 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:56.604 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:56.604 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:56.604 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:56.604 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:56.872 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:56.872 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:57.452 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:57.452 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:57.715 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:57.715 [50/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:57.715 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:57.973 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:57.973 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:58.540 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:58.540 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:58.798 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:58.798 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:58.798 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:59.057 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:59.316 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:59.316 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:59.316 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:59.574 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:59.574 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:59.832 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:00.089 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:00.089 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:00.347 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:00.604 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:00.864 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:01.122 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:01.122 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:01.122 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:01.122 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:01.122 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:01.122 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:01.395 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:01.395 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:01.395 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:01.668 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:01.668 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:02.240 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:02.240 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:02.240 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:02.240 [85/268] Linking static target lib/librte_eal.a 00:08:02.498 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:02.498 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:03.065 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:03.065 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:03.065 [90/268] Linking static target lib/librte_ring.a 00:08:03.065 [91/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:03.325 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:03.583 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:03.583 [94/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:03.583 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:03.583 [96/268] Linking static target lib/librte_rcu.a 00:08:03.583 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:03.841 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:03.841 [99/268] Linking static target lib/librte_mempool.a 00:08:03.841 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:04.407 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:04.407 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:04.407 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:04.665 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:04.665 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:04.923 [106/268] Linking static target lib/librte_mbuf.a 00:08:04.923 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:05.185 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:05.185 [109/268] Linking static target lib/librte_net.a 00:08:05.444 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:05.444 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:05.444 [112/268] Linking static target lib/librte_meter.a 00:08:05.702 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:05.702 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:05.702 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:05.960 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:06.218 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:06.218 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:06.475 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:06.734 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:07.043 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:07.302 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:07.302 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:08.239 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:08.239 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:08.239 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:08.239 [127/268] Linking static target lib/librte_pci.a 00:08:08.239 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:08.239 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:08.498 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:08.498 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:08.498 [132/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:08.498 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:08.756 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:08.756 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:08.756 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:08.756 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:08.756 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:08.756 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:09.015 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.015 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:09.015 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:09.015 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:09.015 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:09.583 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:09.841 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:09.841 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:10.100 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:10.100 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:10.100 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:10.100 [151/268] Linking static target lib/librte_cmdline.a 00:08:10.358 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:10.358 [153/268] Linking static target lib/librte_timer.a 00:08:10.924 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:11.182 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:11.182 [156/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:11.182 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:11.182 [158/268] Linking static target lib/librte_ethdev.a 00:08:11.182 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:11.441 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:11.699 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:12.265 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:12.265 [163/268] Linking static target lib/librte_hash.a 00:08:12.265 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:12.265 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:12.265 [166/268] Linking static target lib/librte_compressdev.a 00:08:12.265 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:12.524 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:12.524 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:12.524 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:12.524 [171/268] Linking static target lib/librte_dmadev.a 00:08:13.090 [172/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:13.090 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:13.660 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:13.660 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:13.919 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:13.919 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:14.177 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:14.177 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:14.177 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:14.435 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:14.435 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:14.435 [183/268] Linking static target lib/librte_cryptodev.a 00:08:14.693 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:14.951 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:14.951 [186/268] Linking static target lib/librte_power.a 00:08:15.516 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:15.775 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:15.775 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:15.775 [190/268] Linking static target lib/librte_reorder.a 00:08:15.775 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:16.033 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:16.033 [193/268] Linking static target lib/librte_security.a 00:08:16.968 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.968 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.968 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:17.226 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:17.226 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:18.160 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:18.160 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:18.418 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:18.418 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:18.687 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:18.687 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:19.253 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:19.253 [206/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.253 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:19.512 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:19.512 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:19.512 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:19.512 
[211/268] Linking target lib/librte_eal.so.24.1 00:08:19.512 [212/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:19.512 [213/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:19.771 [214/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:20.030 [215/268] Linking target lib/librte_pci.so.24.1 00:08:20.030 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:20.030 [217/268] Linking target lib/librte_meter.so.24.1 00:08:20.030 [218/268] Linking target lib/librte_timer.so.24.1 00:08:20.030 [219/268] Linking target lib/librte_ring.so.24.1 00:08:20.030 [220/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:20.030 [221/268] Linking target lib/librte_dmadev.so.24.1 00:08:20.030 [222/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:20.030 [223/268] Linking static target drivers/librte_bus_pci.a 00:08:20.288 [224/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:20.288 [225/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:20.288 [226/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:20.288 [227/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:20.288 [228/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:20.288 [229/268] Linking static target drivers/librte_bus_vdev.a 00:08:20.288 [230/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:20.288 [231/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:20.288 [232/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:20.288 [233/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:20.288 [234/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:20.546 [235/268] Linking target lib/librte_rcu.so.24.1 00:08:20.546 [236/268] Linking target lib/librte_mempool.so.24.1 00:08:20.804 [237/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:20.804 [238/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:20.804 [239/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:20.804 [240/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:20.804 [241/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:20.804 [242/268] Linking static target drivers/librte_mempool_ring.a 00:08:20.804 [243/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:20.804 [244/268] Linking target lib/librte_mbuf.so.24.1 00:08:20.804 [245/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:21.063 [246/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:21.063 [247/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:21.063 [248/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:21.063 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:21.063 [250/268] Linking target lib/librte_net.so.24.1 00:08:21.063 [251/268] Linking target 
lib/librte_compressdev.so.24.1 00:08:21.063 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:08:21.320 [253/268] Linking target lib/librte_reorder.so.24.1 00:08:21.320 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:21.320 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:21.577 [256/268] Linking target lib/librte_hash.so.24.1 00:08:21.577 [257/268] Linking target lib/librte_security.so.24.1 00:08:21.577 [258/268] Linking target lib/librte_cmdline.so.24.1 00:08:21.577 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:22.953 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:22.953 [261/268] Linking target lib/librte_ethdev.so.24.1 00:08:23.211 [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:23.211 [263/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:23.211 [264/268] Linking target lib/librte_power.so.24.1 00:08:29.764 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:29.764 [266/268] Linking static target lib/librte_vhost.a 00:08:30.698 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.956 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:30.956 INFO: autodetecting backend as ninja 00:08:30.956 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:57.545 CC lib/ut_mock/mock.o 00:08:57.545 CC lib/log/log.o 00:08:57.545 CC lib/log/log_deprecated.o 00:08:57.545 CC lib/log/log_flags.o 00:08:57.545 CC lib/ut/ut.o 00:08:57.545 LIB libspdk_ut_mock.a 00:08:57.545 SO libspdk_ut_mock.so.6.0 00:08:57.545 LIB libspdk_log.a 00:08:57.545 SYMLINK libspdk_ut_mock.so 00:08:57.545 LIB libspdk_ut.a 00:08:57.545 SO libspdk_log.so.7.1 00:08:57.545 SO libspdk_ut.so.2.0 00:08:57.545 SYMLINK libspdk_log.so 00:08:57.545 SYMLINK libspdk_ut.so 00:08:57.545 CXX lib/trace_parser/trace.o 00:08:57.545 CC lib/dma/dma.o 00:08:57.545 CC lib/util/bit_array.o 00:08:57.545 CC lib/util/base64.o 00:08:57.545 CC lib/util/crc16.o 00:08:57.545 CC lib/util/crc32.o 00:08:57.545 CC lib/util/cpuset.o 00:08:57.545 CC lib/util/crc32c.o 00:08:57.545 CC lib/ioat/ioat.o 00:08:57.545 CC lib/vfio_user/host/vfio_user_pci.o 00:08:57.545 CC lib/vfio_user/host/vfio_user.o 00:08:57.545 CC lib/util/crc32_ieee.o 00:08:57.545 CC lib/util/crc64.o 00:08:57.545 CC lib/util/dif.o 00:08:57.545 CC lib/util/fd.o 00:08:57.545 LIB libspdk_dma.a 00:08:57.545 SO libspdk_dma.so.5.0 00:08:57.545 CC lib/util/fd_group.o 00:08:57.545 CC lib/util/file.o 00:08:57.545 CC lib/util/hexlify.o 00:08:57.545 SYMLINK libspdk_dma.so 00:08:57.545 CC lib/util/iov.o 00:08:57.545 CC lib/util/math.o 00:08:57.545 CC lib/util/net.o 00:08:57.545 LIB libspdk_vfio_user.a 00:08:57.545 SO libspdk_vfio_user.so.5.0 00:08:57.545 CC lib/util/pipe.o 00:08:57.545 LIB libspdk_ioat.a 00:08:57.545 SO libspdk_ioat.so.7.0 00:08:57.545 SYMLINK libspdk_vfio_user.so 00:08:57.545 CC lib/util/strerror_tls.o 00:08:57.545 CC lib/util/string.o 00:08:57.545 SYMLINK libspdk_ioat.so 00:08:57.545 CC lib/util/uuid.o 00:08:57.545 CC lib/util/xor.o 00:08:57.545 CC lib/util/zipf.o 00:08:57.545 CC lib/util/md5.o 00:08:57.545 LIB libspdk_util.a 00:08:57.545 SO libspdk_util.so.10.1 00:08:57.545 LIB libspdk_trace_parser.a 00:08:57.545 SO libspdk_trace_parser.so.6.0 00:08:57.545 SYMLINK 
libspdk_util.so 00:08:57.545 SYMLINK libspdk_trace_parser.so 00:08:57.545 CC lib/vmd/vmd.o 00:08:57.545 CC lib/vmd/led.o 00:08:57.545 CC lib/conf/conf.o 00:08:57.545 CC lib/rdma_utils/rdma_utils.o 00:08:57.545 CC lib/json/json_parse.o 00:08:57.545 CC lib/json/json_write.o 00:08:57.545 CC lib/json/json_util.o 00:08:57.545 CC lib/env_dpdk/env.o 00:08:57.545 CC lib/env_dpdk/memory.o 00:08:57.545 CC lib/idxd/idxd.o 00:08:57.805 CC lib/idxd/idxd_user.o 00:08:57.805 CC lib/idxd/idxd_kernel.o 00:08:58.065 LIB libspdk_conf.a 00:08:58.065 SO libspdk_conf.so.6.0 00:08:58.065 CC lib/env_dpdk/pci.o 00:08:58.065 CC lib/env_dpdk/init.o 00:08:58.065 LIB libspdk_rdma_utils.a 00:08:58.065 SYMLINK libspdk_conf.so 00:08:58.065 CC lib/env_dpdk/threads.o 00:08:58.065 LIB libspdk_json.a 00:08:58.065 SO libspdk_rdma_utils.so.1.0 00:08:58.065 SO libspdk_json.so.6.0 00:08:58.330 SYMLINK libspdk_rdma_utils.so 00:08:58.330 SYMLINK libspdk_json.so 00:08:58.330 CC lib/env_dpdk/pci_ioat.o 00:08:58.330 CC lib/env_dpdk/pci_virtio.o 00:08:58.330 CC lib/env_dpdk/pci_vmd.o 00:08:58.587 CC lib/rdma_provider/common.o 00:08:58.587 CC lib/env_dpdk/pci_idxd.o 00:08:58.587 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:58.587 CC lib/env_dpdk/pci_event.o 00:08:58.587 LIB libspdk_vmd.a 00:08:58.587 CC lib/env_dpdk/sigbus_handler.o 00:08:58.587 SO libspdk_vmd.so.6.0 00:08:58.587 CC lib/env_dpdk/pci_dpdk.o 00:08:58.846 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:58.846 SYMLINK libspdk_vmd.so 00:08:58.846 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:58.846 CC lib/jsonrpc/jsonrpc_server.o 00:08:58.846 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:58.846 CC lib/jsonrpc/jsonrpc_client.o 00:08:58.846 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:58.846 LIB libspdk_rdma_provider.a 00:08:58.846 SO libspdk_rdma_provider.so.7.0 00:08:58.846 LIB libspdk_idxd.a 00:08:59.104 SO libspdk_idxd.so.12.1 00:08:59.104 SYMLINK libspdk_rdma_provider.so 00:08:59.104 SYMLINK libspdk_idxd.so 00:08:59.362 LIB libspdk_jsonrpc.a 00:08:59.362 SO libspdk_jsonrpc.so.6.0 00:08:59.621 SYMLINK libspdk_jsonrpc.so 00:08:59.621 CC lib/rpc/rpc.o 00:09:00.186 LIB libspdk_rpc.a 00:09:00.186 SO libspdk_rpc.so.6.0 00:09:00.186 LIB libspdk_env_dpdk.a 00:09:00.186 SYMLINK libspdk_rpc.so 00:09:00.186 SO libspdk_env_dpdk.so.15.1 00:09:00.444 CC lib/notify/notify.o 00:09:00.444 CC lib/notify/notify_rpc.o 00:09:00.444 CC lib/keyring/keyring.o 00:09:00.444 CC lib/keyring/keyring_rpc.o 00:09:00.444 CC lib/trace/trace.o 00:09:00.444 CC lib/trace/trace_rpc.o 00:09:00.444 CC lib/trace/trace_flags.o 00:09:00.444 SYMLINK libspdk_env_dpdk.so 00:09:00.702 LIB libspdk_notify.a 00:09:00.702 SO libspdk_notify.so.6.0 00:09:00.702 LIB libspdk_keyring.a 00:09:00.702 SYMLINK libspdk_notify.so 00:09:00.960 SO libspdk_keyring.so.2.0 00:09:00.960 SYMLINK libspdk_keyring.so 00:09:00.960 LIB libspdk_trace.a 00:09:00.960 SO libspdk_trace.so.11.0 00:09:00.960 SYMLINK libspdk_trace.so 00:09:01.218 CC lib/sock/sock.o 00:09:01.218 CC lib/sock/sock_rpc.o 00:09:01.218 CC lib/thread/thread.o 00:09:01.218 CC lib/thread/iobuf.o 00:09:02.149 LIB libspdk_sock.a 00:09:02.149 SO libspdk_sock.so.10.0 00:09:02.149 SYMLINK libspdk_sock.so 00:09:02.408 CC lib/nvme/nvme_ctrlr.o 00:09:02.408 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:02.408 CC lib/nvme/nvme_fabric.o 00:09:02.408 CC lib/nvme/nvme_ns_cmd.o 00:09:02.408 CC lib/nvme/nvme_ns.o 00:09:02.408 CC lib/nvme/nvme_pcie_common.o 00:09:02.408 CC lib/nvme/nvme_pcie.o 00:09:02.408 CC lib/nvme/nvme_qpair.o 00:09:02.408 CC lib/nvme/nvme.o 00:09:04.307 CC lib/nvme/nvme_quirks.o 00:09:04.307 CC 
lib/nvme/nvme_transport.o 00:09:04.307 CC lib/nvme/nvme_discovery.o 00:09:04.307 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:04.307 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:04.307 CC lib/nvme/nvme_tcp.o 00:09:04.307 CC lib/nvme/nvme_opal.o 00:09:04.565 CC lib/nvme/nvme_io_msg.o 00:09:04.565 LIB libspdk_thread.a 00:09:04.565 SO libspdk_thread.so.11.0 00:09:04.823 SYMLINK libspdk_thread.so 00:09:05.081 CC lib/accel/accel.o 00:09:05.081 CC lib/blob/blobstore.o 00:09:05.081 CC lib/nvme/nvme_poll_group.o 00:09:05.339 CC lib/accel/accel_rpc.o 00:09:05.339 CC lib/nvme/nvme_zns.o 00:09:05.597 CC lib/nvme/nvme_stubs.o 00:09:05.597 CC lib/blob/request.o 00:09:05.855 CC lib/blob/zeroes.o 00:09:05.855 CC lib/init/json_config.o 00:09:06.113 CC lib/init/subsystem.o 00:09:06.113 CC lib/nvme/nvme_auth.o 00:09:06.113 CC lib/init/subsystem_rpc.o 00:09:06.371 CC lib/blob/blob_bs_dev.o 00:09:06.371 CC lib/init/rpc.o 00:09:06.371 CC lib/nvme/nvme_cuse.o 00:09:06.629 CC lib/accel/accel_sw.o 00:09:06.629 CC lib/virtio/virtio.o 00:09:06.629 CC lib/fsdev/fsdev.o 00:09:06.629 LIB libspdk_init.a 00:09:06.629 SO libspdk_init.so.6.0 00:09:06.887 SYMLINK libspdk_init.so 00:09:06.887 CC lib/fsdev/fsdev_io.o 00:09:07.145 CC lib/virtio/virtio_vhost_user.o 00:09:07.145 CC lib/fsdev/fsdev_rpc.o 00:09:07.402 CC lib/nvme/nvme_rdma.o 00:09:07.402 CC lib/virtio/virtio_vfio_user.o 00:09:07.402 CC lib/event/app.o 00:09:07.659 LIB libspdk_accel.a 00:09:07.659 LIB libspdk_fsdev.a 00:09:07.659 SO libspdk_accel.so.16.0 00:09:07.659 CC lib/event/reactor.o 00:09:07.659 CC lib/event/log_rpc.o 00:09:07.659 SO libspdk_fsdev.so.2.0 00:09:07.660 SYMLINK libspdk_accel.so 00:09:07.660 CC lib/virtio/virtio_pci.o 00:09:07.660 CC lib/event/app_rpc.o 00:09:07.660 CC lib/event/scheduler_static.o 00:09:07.660 SYMLINK libspdk_fsdev.so 00:09:07.917 CC lib/bdev/bdev_rpc.o 00:09:07.917 CC lib/bdev/bdev_zone.o 00:09:07.917 CC lib/bdev/bdev.o 00:09:07.917 CC lib/bdev/part.o 00:09:08.175 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:08.175 LIB libspdk_virtio.a 00:09:08.175 SO libspdk_virtio.so.7.0 00:09:08.175 SYMLINK libspdk_virtio.so 00:09:08.175 CC lib/bdev/scsi_nvme.o 00:09:08.433 LIB libspdk_event.a 00:09:08.433 SO libspdk_event.so.14.0 00:09:08.433 SYMLINK libspdk_event.so 00:09:09.000 LIB libspdk_fuse_dispatcher.a 00:09:09.000 SO libspdk_fuse_dispatcher.so.1.0 00:09:09.000 SYMLINK libspdk_fuse_dispatcher.so 00:09:09.569 LIB libspdk_nvme.a 00:09:09.569 SO libspdk_nvme.so.15.0 00:09:10.137 SYMLINK libspdk_nvme.so 00:09:10.137 LIB libspdk_blob.a 00:09:10.137 SO libspdk_blob.so.11.0 00:09:10.395 SYMLINK libspdk_blob.so 00:09:10.653 CC lib/blobfs/blobfs.o 00:09:10.653 CC lib/blobfs/tree.o 00:09:10.653 CC lib/lvol/lvol.o 00:09:12.026 LIB libspdk_blobfs.a 00:09:12.026 SO libspdk_blobfs.so.10.0 00:09:12.026 SYMLINK libspdk_blobfs.so 00:09:12.026 LIB libspdk_bdev.a 00:09:12.026 LIB libspdk_lvol.a 00:09:12.026 SO libspdk_bdev.so.17.0 00:09:12.026 SO libspdk_lvol.so.10.0 00:09:12.282 SYMLINK libspdk_lvol.so 00:09:12.282 SYMLINK libspdk_bdev.so 00:09:12.282 CC lib/nbd/nbd.o 00:09:12.282 CC lib/nbd/nbd_rpc.o 00:09:12.282 CC lib/nvmf/ctrlr.o 00:09:12.282 CC lib/nvmf/ctrlr_discovery.o 00:09:12.282 CC lib/ublk/ublk.o 00:09:12.282 CC lib/ftl/ftl_core.o 00:09:12.282 CC lib/nvmf/ctrlr_bdev.o 00:09:12.282 CC lib/ftl/ftl_init.o 00:09:12.282 CC lib/nvmf/subsystem.o 00:09:12.282 CC lib/scsi/dev.o 00:09:12.847 CC lib/scsi/lun.o 00:09:12.847 CC lib/scsi/port.o 00:09:12.847 CC lib/scsi/scsi.o 00:09:13.105 CC lib/scsi/scsi_bdev.o 00:09:13.105 CC lib/scsi/scsi_pr.o 00:09:13.105 CC 
lib/scsi/scsi_rpc.o 00:09:13.364 CC lib/ftl/ftl_layout.o 00:09:13.364 LIB libspdk_nbd.a 00:09:13.364 SO libspdk_nbd.so.7.0 00:09:13.365 CC lib/ublk/ublk_rpc.o 00:09:13.365 CC lib/scsi/task.o 00:09:13.365 SYMLINK libspdk_nbd.so 00:09:13.624 CC lib/nvmf/nvmf.o 00:09:13.624 CC lib/ftl/ftl_debug.o 00:09:13.624 LIB libspdk_ublk.a 00:09:13.624 CC lib/nvmf/nvmf_rpc.o 00:09:13.881 CC lib/ftl/ftl_io.o 00:09:13.881 SO libspdk_ublk.so.3.0 00:09:13.881 LIB libspdk_scsi.a 00:09:13.882 SYMLINK libspdk_ublk.so 00:09:13.882 CC lib/nvmf/transport.o 00:09:13.882 CC lib/nvmf/tcp.o 00:09:13.882 SO libspdk_scsi.so.9.0 00:09:13.882 CC lib/ftl/ftl_sb.o 00:09:13.882 CC lib/ftl/ftl_l2p.o 00:09:14.139 SYMLINK libspdk_scsi.so 00:09:14.140 CC lib/nvmf/stubs.o 00:09:14.140 CC lib/nvmf/mdns_server.o 00:09:14.140 CC lib/nvmf/rdma.o 00:09:14.140 CC lib/ftl/ftl_l2p_flat.o 00:09:14.710 CC lib/ftl/ftl_nv_cache.o 00:09:14.710 CC lib/nvmf/auth.o 00:09:14.710 CC lib/ftl/ftl_band.o 00:09:14.710 CC lib/ftl/ftl_band_ops.o 00:09:14.710 CC lib/ftl/ftl_writer.o 00:09:14.968 CC lib/ftl/ftl_rq.o 00:09:14.968 CC lib/ftl/ftl_reloc.o 00:09:14.968 CC lib/ftl/ftl_l2p_cache.o 00:09:15.226 CC lib/ftl/ftl_p2l.o 00:09:15.226 CC lib/ftl/ftl_p2l_log.o 00:09:15.511 CC lib/ftl/mngt/ftl_mngt.o 00:09:15.511 CC lib/iscsi/conn.o 00:09:15.511 CC lib/vhost/vhost.o 00:09:15.775 CC lib/vhost/vhost_rpc.o 00:09:15.775 CC lib/iscsi/init_grp.o 00:09:16.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:16.033 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:16.033 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:16.292 CC lib/vhost/vhost_scsi.o 00:09:16.292 CC lib/vhost/vhost_blk.o 00:09:16.292 CC lib/vhost/rte_vhost_user.o 00:09:16.292 CC lib/iscsi/iscsi.o 00:09:16.292 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:16.550 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:16.808 CC lib/iscsi/param.o 00:09:17.067 CC lib/iscsi/portal_grp.o 00:09:17.067 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:17.067 CC lib/iscsi/tgt_node.o 00:09:17.324 CC lib/iscsi/iscsi_subsystem.o 00:09:17.582 CC lib/iscsi/iscsi_rpc.o 00:09:17.582 LIB libspdk_nvmf.a 00:09:17.582 CC lib/iscsi/task.o 00:09:17.582 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:17.582 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:17.582 SO libspdk_nvmf.so.20.0 00:09:17.840 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:17.840 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:17.840 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:18.099 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:18.099 SYMLINK libspdk_nvmf.so 00:09:18.099 CC lib/ftl/utils/ftl_conf.o 00:09:18.099 CC lib/ftl/utils/ftl_md.o 00:09:18.099 CC lib/ftl/utils/ftl_mempool.o 00:09:18.099 CC lib/ftl/utils/ftl_bitmap.o 00:09:18.357 CC lib/ftl/utils/ftl_property.o 00:09:18.357 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:18.357 LIB libspdk_vhost.a 00:09:18.357 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:18.357 SO libspdk_vhost.so.8.0 00:09:18.357 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:18.357 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:18.614 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:18.614 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:18.614 SYMLINK libspdk_vhost.so 00:09:18.614 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:18.614 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:18.614 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:18.614 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:18.872 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:18.872 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:18.872 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:18.872 CC lib/ftl/base/ftl_base_dev.o 00:09:18.872 CC lib/ftl/base/ftl_base_bdev.o 00:09:18.872 CC lib/ftl/ftl_trace.o 00:09:19.131 LIB libspdk_iscsi.a 
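A note on the output format in this stretch: these are SPDK's own make-based build lines rather than ninja's. CC compiles one object, LIB archives a static library (e.g. LIB libspdk_ut_mock.a earlier), SO links the versioned shared object (SO libspdk_ut_mock.so.6.0), and SYMLINK creates the unversioned library symlink. A rough local equivalent of this phase, with the caveat that the flags this job actually passes to configure come from the CI configuration and are not shown in this excerpt:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug   # illustrative flag choice only
    make -j"$(nproc)"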
00:09:19.131 LIB libspdk_ftl.a 00:09:19.388 SO libspdk_iscsi.so.8.0 00:09:19.388 SYMLINK libspdk_iscsi.so 00:09:19.645 SO libspdk_ftl.so.9.0 00:09:19.903 SYMLINK libspdk_ftl.so 00:09:20.161 CC module/env_dpdk/env_dpdk_rpc.o 00:09:20.420 CC module/accel/ioat/accel_ioat.o 00:09:20.420 CC module/fsdev/aio/fsdev_aio.o 00:09:20.420 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:20.420 CC module/accel/error/accel_error.o 00:09:20.420 CC module/blob/bdev/blob_bdev.o 00:09:20.420 CC module/scheduler/gscheduler/gscheduler.o 00:09:20.420 CC module/keyring/file/keyring.o 00:09:20.420 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:20.420 CC module/sock/posix/posix.o 00:09:20.420 LIB libspdk_env_dpdk_rpc.a 00:09:20.420 SO libspdk_env_dpdk_rpc.so.6.0 00:09:20.420 SYMLINK libspdk_env_dpdk_rpc.so 00:09:20.420 CC module/accel/ioat/accel_ioat_rpc.o 00:09:20.420 LIB libspdk_scheduler_gscheduler.a 00:09:20.420 LIB libspdk_scheduler_dpdk_governor.a 00:09:20.679 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:20.679 SO libspdk_scheduler_gscheduler.so.4.0 00:09:20.679 CC module/accel/error/accel_error_rpc.o 00:09:20.679 CC module/keyring/file/keyring_rpc.o 00:09:20.679 SYMLINK libspdk_scheduler_gscheduler.so 00:09:20.679 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:20.679 LIB libspdk_accel_ioat.a 00:09:20.679 LIB libspdk_scheduler_dynamic.a 00:09:20.679 SO libspdk_accel_ioat.so.6.0 00:09:20.679 LIB libspdk_accel_error.a 00:09:20.679 SO libspdk_scheduler_dynamic.so.4.0 00:09:20.938 LIB libspdk_blob_bdev.a 00:09:20.938 SO libspdk_accel_error.so.2.0 00:09:20.938 LIB libspdk_keyring_file.a 00:09:20.938 SO libspdk_blob_bdev.so.11.0 00:09:20.938 SYMLINK libspdk_accel_ioat.so 00:09:20.938 SYMLINK libspdk_scheduler_dynamic.so 00:09:20.938 SO libspdk_keyring_file.so.2.0 00:09:20.938 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:20.938 SYMLINK libspdk_accel_error.so 00:09:20.938 CC module/fsdev/aio/linux_aio_mgr.o 00:09:20.938 CC module/accel/dsa/accel_dsa.o 00:09:20.938 CC module/accel/dsa/accel_dsa_rpc.o 00:09:20.938 SYMLINK libspdk_blob_bdev.so 00:09:20.938 CC module/keyring/linux/keyring.o 00:09:20.938 SYMLINK libspdk_keyring_file.so 00:09:20.938 CC module/keyring/linux/keyring_rpc.o 00:09:20.938 CC module/accel/iaa/accel_iaa.o 00:09:21.248 CC module/accel/iaa/accel_iaa_rpc.o 00:09:21.248 LIB libspdk_keyring_linux.a 00:09:21.248 SO libspdk_keyring_linux.so.1.0 00:09:21.248 CC module/bdev/delay/vbdev_delay.o 00:09:21.248 SYMLINK libspdk_keyring_linux.so 00:09:21.248 CC module/bdev/error/vbdev_error.o 00:09:21.248 LIB libspdk_accel_dsa.a 00:09:21.248 CC module/bdev/gpt/gpt.o 00:09:21.248 LIB libspdk_fsdev_aio.a 00:09:21.248 LIB libspdk_accel_iaa.a 00:09:21.248 SO libspdk_accel_dsa.so.5.0 00:09:21.248 CC module/bdev/lvol/vbdev_lvol.o 00:09:21.248 SO libspdk_accel_iaa.so.3.0 00:09:21.248 SO libspdk_fsdev_aio.so.1.0 00:09:21.506 SYMLINK libspdk_accel_dsa.so 00:09:21.506 CC module/bdev/gpt/vbdev_gpt.o 00:09:21.506 CC module/bdev/malloc/bdev_malloc.o 00:09:21.506 SYMLINK libspdk_accel_iaa.so 00:09:21.506 CC module/bdev/error/vbdev_error_rpc.o 00:09:21.506 SYMLINK libspdk_fsdev_aio.so 00:09:21.506 LIB libspdk_sock_posix.a 00:09:21.506 SO libspdk_sock_posix.so.6.0 00:09:21.506 CC module/blobfs/bdev/blobfs_bdev.o 00:09:21.506 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:21.506 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:21.764 SYMLINK libspdk_sock_posix.so 00:09:21.764 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:21.764 LIB libspdk_bdev_error.a 00:09:21.764 CC module/bdev/null/bdev_null.o 00:09:21.764 LIB 
libspdk_bdev_gpt.a 00:09:21.764 SO libspdk_bdev_error.so.6.0 00:09:21.764 LIB libspdk_blobfs_bdev.a 00:09:21.764 SO libspdk_bdev_gpt.so.6.0 00:09:21.764 SO libspdk_blobfs_bdev.so.6.0 00:09:22.022 SYMLINK libspdk_bdev_error.so 00:09:22.022 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:22.022 SYMLINK libspdk_bdev_gpt.so 00:09:22.022 CC module/bdev/nvme/bdev_nvme.o 00:09:22.022 LIB libspdk_bdev_delay.a 00:09:22.022 SYMLINK libspdk_blobfs_bdev.so 00:09:22.022 CC module/bdev/null/bdev_null_rpc.o 00:09:22.022 CC module/bdev/passthru/vbdev_passthru.o 00:09:22.022 SO libspdk_bdev_delay.so.6.0 00:09:22.022 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:22.022 SYMLINK libspdk_bdev_delay.so 00:09:22.022 LIB libspdk_bdev_malloc.a 00:09:22.022 CC module/bdev/split/vbdev_split.o 00:09:22.022 CC module/bdev/raid/bdev_raid.o 00:09:22.022 LIB libspdk_bdev_lvol.a 00:09:22.280 SO libspdk_bdev_malloc.so.6.0 00:09:22.280 CC module/bdev/raid/bdev_raid_rpc.o 00:09:22.280 SO libspdk_bdev_lvol.so.6.0 00:09:22.280 LIB libspdk_bdev_null.a 00:09:22.280 SO libspdk_bdev_null.so.6.0 00:09:22.280 SYMLINK libspdk_bdev_malloc.so 00:09:22.280 CC module/bdev/raid/bdev_raid_sb.o 00:09:22.280 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:22.280 SYMLINK libspdk_bdev_lvol.so 00:09:22.280 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:22.280 CC module/bdev/raid/raid0.o 00:09:22.280 SYMLINK libspdk_bdev_null.so 00:09:22.280 CC module/bdev/raid/raid1.o 00:09:22.280 LIB libspdk_bdev_passthru.a 00:09:22.280 SO libspdk_bdev_passthru.so.6.0 00:09:22.537 CC module/bdev/split/vbdev_split_rpc.o 00:09:22.537 SYMLINK libspdk_bdev_passthru.so 00:09:22.537 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:22.537 CC module/bdev/raid/concat.o 00:09:22.537 CC module/bdev/xnvme/bdev_xnvme.o 00:09:22.537 LIB libspdk_bdev_split.a 00:09:22.537 CC module/bdev/aio/bdev_aio.o 00:09:22.796 CC module/bdev/nvme/nvme_rpc.o 00:09:22.796 SO libspdk_bdev_split.so.6.0 00:09:22.796 LIB libspdk_bdev_zone_block.a 00:09:22.796 SO libspdk_bdev_zone_block.so.6.0 00:09:22.796 SYMLINK libspdk_bdev_split.so 00:09:22.796 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:09:22.796 CC module/bdev/ftl/bdev_ftl.o 00:09:22.796 SYMLINK libspdk_bdev_zone_block.so 00:09:22.796 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:23.054 CC module/bdev/nvme/bdev_mdns_client.o 00:09:23.054 CC module/bdev/aio/bdev_aio_rpc.o 00:09:23.054 LIB libspdk_bdev_aio.a 00:09:23.311 CC module/bdev/nvme/vbdev_opal.o 00:09:23.311 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:23.311 LIB libspdk_bdev_xnvme.a 00:09:23.311 SO libspdk_bdev_aio.so.6.0 00:09:23.311 CC module/bdev/iscsi/bdev_iscsi.o 00:09:23.311 SO libspdk_bdev_xnvme.so.3.0 00:09:23.311 LIB libspdk_bdev_ftl.a 00:09:23.311 SYMLINK libspdk_bdev_aio.so 00:09:23.311 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:23.311 SO libspdk_bdev_ftl.so.6.0 00:09:23.311 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:23.311 SYMLINK libspdk_bdev_xnvme.so 00:09:23.311 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:23.311 SYMLINK libspdk_bdev_ftl.so 00:09:23.311 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:23.569 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:23.569 LIB libspdk_bdev_raid.a 00:09:23.569 SO libspdk_bdev_raid.so.6.0 00:09:23.827 SYMLINK libspdk_bdev_raid.so 00:09:23.827 LIB libspdk_bdev_iscsi.a 00:09:24.085 SO libspdk_bdev_iscsi.so.6.0 00:09:24.085 SYMLINK libspdk_bdev_iscsi.so 00:09:24.344 LIB libspdk_bdev_virtio.a 00:09:24.602 SO libspdk_bdev_virtio.so.6.0 00:09:24.602 SYMLINK libspdk_bdev_virtio.so 00:09:26.503 LIB libspdk_bdev_nvme.a 00:09:26.503 SO 
libspdk_bdev_nvme.so.7.1 00:09:26.503 SYMLINK libspdk_bdev_nvme.so 00:09:26.761 CC module/event/subsystems/iobuf/iobuf.o 00:09:26.761 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:26.761 CC module/event/subsystems/fsdev/fsdev.o 00:09:26.761 CC module/event/subsystems/sock/sock.o 00:09:26.761 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:27.019 CC module/event/subsystems/keyring/keyring.o 00:09:27.019 CC module/event/subsystems/vmd/vmd.o 00:09:27.019 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:27.019 CC module/event/subsystems/scheduler/scheduler.o 00:09:27.019 LIB libspdk_event_keyring.a 00:09:27.019 LIB libspdk_event_sock.a 00:09:27.019 SO libspdk_event_keyring.so.1.0 00:09:27.019 SO libspdk_event_sock.so.5.0 00:09:27.019 LIB libspdk_event_fsdev.a 00:09:27.019 LIB libspdk_event_vhost_blk.a 00:09:27.019 LIB libspdk_event_scheduler.a 00:09:27.019 LIB libspdk_event_iobuf.a 00:09:27.278 SO libspdk_event_fsdev.so.1.0 00:09:27.278 LIB libspdk_event_vmd.a 00:09:27.278 SO libspdk_event_scheduler.so.4.0 00:09:27.278 SO libspdk_event_vhost_blk.so.3.0 00:09:27.278 SYMLINK libspdk_event_keyring.so 00:09:27.278 SO libspdk_event_iobuf.so.3.0 00:09:27.278 SO libspdk_event_vmd.so.6.0 00:09:27.278 SYMLINK libspdk_event_sock.so 00:09:27.278 SYMLINK libspdk_event_vhost_blk.so 00:09:27.278 SYMLINK libspdk_event_scheduler.so 00:09:27.278 SYMLINK libspdk_event_fsdev.so 00:09:27.278 SYMLINK libspdk_event_vmd.so 00:09:27.278 SYMLINK libspdk_event_iobuf.so 00:09:27.542 CC module/event/subsystems/accel/accel.o 00:09:27.800 LIB libspdk_event_accel.a 00:09:27.800 SO libspdk_event_accel.so.6.0 00:09:27.800 SYMLINK libspdk_event_accel.so 00:09:28.119 CC module/event/subsystems/bdev/bdev.o 00:09:28.377 LIB libspdk_event_bdev.a 00:09:28.377 SO libspdk_event_bdev.so.6.0 00:09:28.377 SYMLINK libspdk_event_bdev.so 00:09:28.636 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:28.636 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:28.636 CC module/event/subsystems/nbd/nbd.o 00:09:28.636 CC module/event/subsystems/scsi/scsi.o 00:09:28.636 CC module/event/subsystems/ublk/ublk.o 00:09:28.894 LIB libspdk_event_nbd.a 00:09:28.894 SO libspdk_event_nbd.so.6.0 00:09:28.894 LIB libspdk_event_scsi.a 00:09:28.894 LIB libspdk_event_ublk.a 00:09:28.894 SO libspdk_event_scsi.so.6.0 00:09:28.894 SYMLINK libspdk_event_nbd.so 00:09:28.894 SO libspdk_event_ublk.so.3.0 00:09:28.894 LIB libspdk_event_nvmf.a 00:09:28.894 SO libspdk_event_nvmf.so.6.0 00:09:28.894 SYMLINK libspdk_event_ublk.so 00:09:28.894 SYMLINK libspdk_event_scsi.so 00:09:28.894 SYMLINK libspdk_event_nvmf.so 00:09:29.152 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:29.152 CC module/event/subsystems/iscsi/iscsi.o 00:09:29.411 LIB libspdk_event_iscsi.a 00:09:29.411 LIB libspdk_event_vhost_scsi.a 00:09:29.411 SO libspdk_event_iscsi.so.6.0 00:09:29.411 SO libspdk_event_vhost_scsi.so.3.0 00:09:29.411 SYMLINK libspdk_event_vhost_scsi.so 00:09:29.411 SYMLINK libspdk_event_iscsi.so 00:09:29.669 SO libspdk.so.6.0 00:09:29.669 SYMLINK libspdk.so 00:09:29.928 CXX app/trace/trace.o 00:09:29.928 CC app/spdk_lspci/spdk_lspci.o 00:09:29.928 CC app/trace_record/trace_record.o 00:09:29.928 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:29.928 CC app/iscsi_tgt/iscsi_tgt.o 00:09:29.928 CC examples/ioat/perf/perf.o 00:09:29.928 CC app/nvmf_tgt/nvmf_main.o 00:09:29.928 CC test/thread/poller_perf/poller_perf.o 00:09:29.928 CC app/spdk_tgt/spdk_tgt.o 00:09:29.928 CC examples/util/zipf/zipf.o 00:09:30.185 LINK spdk_lspci 00:09:30.185 LINK poller_perf 00:09:30.185 LINK 
iscsi_tgt 00:09:30.185 LINK interrupt_tgt 00:09:30.185 LINK spdk_trace_record 00:09:30.185 LINK nvmf_tgt 00:09:30.443 LINK spdk_tgt 00:09:30.443 LINK ioat_perf 00:09:30.443 LINK zipf 00:09:30.443 CC examples/ioat/verify/verify.o 00:09:30.443 LINK spdk_trace 00:09:30.702 CC test/dma/test_dma/test_dma.o 00:09:30.702 CC test/app/bdev_svc/bdev_svc.o 00:09:30.702 TEST_HEADER include/spdk/accel.h 00:09:30.702 TEST_HEADER include/spdk/accel_module.h 00:09:30.702 TEST_HEADER include/spdk/assert.h 00:09:30.702 TEST_HEADER include/spdk/barrier.h 00:09:30.702 TEST_HEADER include/spdk/base64.h 00:09:30.702 TEST_HEADER include/spdk/bdev.h 00:09:30.702 TEST_HEADER include/spdk/bdev_module.h 00:09:30.702 TEST_HEADER include/spdk/bdev_zone.h 00:09:30.702 TEST_HEADER include/spdk/bit_array.h 00:09:30.702 TEST_HEADER include/spdk/bit_pool.h 00:09:30.702 TEST_HEADER include/spdk/blob_bdev.h 00:09:30.702 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:30.702 TEST_HEADER include/spdk/blobfs.h 00:09:30.702 TEST_HEADER include/spdk/blob.h 00:09:30.702 TEST_HEADER include/spdk/conf.h 00:09:30.702 TEST_HEADER include/spdk/config.h 00:09:30.702 TEST_HEADER include/spdk/cpuset.h 00:09:30.702 TEST_HEADER include/spdk/crc16.h 00:09:30.702 TEST_HEADER include/spdk/crc32.h 00:09:30.702 TEST_HEADER include/spdk/crc64.h 00:09:30.702 TEST_HEADER include/spdk/dif.h 00:09:30.702 TEST_HEADER include/spdk/dma.h 00:09:30.702 TEST_HEADER include/spdk/endian.h 00:09:30.702 TEST_HEADER include/spdk/env_dpdk.h 00:09:30.702 TEST_HEADER include/spdk/env.h 00:09:30.702 TEST_HEADER include/spdk/event.h 00:09:30.702 TEST_HEADER include/spdk/fd_group.h 00:09:30.702 TEST_HEADER include/spdk/fd.h 00:09:30.702 TEST_HEADER include/spdk/file.h 00:09:30.702 TEST_HEADER include/spdk/fsdev.h 00:09:30.702 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:30.702 TEST_HEADER include/spdk/fsdev_module.h 00:09:30.702 TEST_HEADER include/spdk/ftl.h 00:09:30.702 TEST_HEADER include/spdk/fuse_dispatcher.h 00:09:30.702 TEST_HEADER include/spdk/gpt_spec.h 00:09:30.702 TEST_HEADER include/spdk/hexlify.h 00:09:30.702 TEST_HEADER include/spdk/histogram_data.h 00:09:30.702 TEST_HEADER include/spdk/idxd.h 00:09:30.702 TEST_HEADER include/spdk/idxd_spec.h 00:09:30.702 TEST_HEADER include/spdk/init.h 00:09:30.702 TEST_HEADER include/spdk/ioat.h 00:09:30.702 TEST_HEADER include/spdk/ioat_spec.h 00:09:30.702 TEST_HEADER include/spdk/iscsi_spec.h 00:09:30.702 TEST_HEADER include/spdk/json.h 00:09:30.702 TEST_HEADER include/spdk/jsonrpc.h 00:09:30.702 TEST_HEADER include/spdk/keyring.h 00:09:30.702 CC test/env/vtophys/vtophys.o 00:09:30.702 TEST_HEADER include/spdk/keyring_module.h 00:09:30.702 CC test/event/event_perf/event_perf.o 00:09:30.702 TEST_HEADER include/spdk/likely.h 00:09:30.702 TEST_HEADER include/spdk/log.h 00:09:30.702 TEST_HEADER include/spdk/lvol.h 00:09:30.702 TEST_HEADER include/spdk/md5.h 00:09:30.962 TEST_HEADER include/spdk/memory.h 00:09:30.962 TEST_HEADER include/spdk/mmio.h 00:09:30.962 TEST_HEADER include/spdk/nbd.h 00:09:30.962 TEST_HEADER include/spdk/net.h 00:09:30.962 TEST_HEADER include/spdk/notify.h 00:09:30.962 TEST_HEADER include/spdk/nvme.h 00:09:30.962 TEST_HEADER include/spdk/nvme_intel.h 00:09:30.962 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:30.962 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:30.962 TEST_HEADER include/spdk/nvme_spec.h 00:09:30.962 TEST_HEADER include/spdk/nvme_zns.h 00:09:30.962 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:30.962 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:30.962 TEST_HEADER include/spdk/nvmf.h 
00:09:30.962 TEST_HEADER include/spdk/nvmf_spec.h 00:09:30.962 TEST_HEADER include/spdk/nvmf_transport.h 00:09:30.962 TEST_HEADER include/spdk/opal.h 00:09:30.962 CC examples/thread/thread/thread_ex.o 00:09:30.962 TEST_HEADER include/spdk/opal_spec.h 00:09:30.962 TEST_HEADER include/spdk/pci_ids.h 00:09:30.962 TEST_HEADER include/spdk/pipe.h 00:09:30.962 TEST_HEADER include/spdk/queue.h 00:09:30.962 TEST_HEADER include/spdk/reduce.h 00:09:30.962 TEST_HEADER include/spdk/rpc.h 00:09:30.962 TEST_HEADER include/spdk/scheduler.h 00:09:30.962 CC app/spdk_nvme_perf/perf.o 00:09:30.962 TEST_HEADER include/spdk/scsi.h 00:09:30.962 TEST_HEADER include/spdk/scsi_spec.h 00:09:30.962 TEST_HEADER include/spdk/sock.h 00:09:30.962 TEST_HEADER include/spdk/stdinc.h 00:09:30.962 TEST_HEADER include/spdk/string.h 00:09:30.962 TEST_HEADER include/spdk/thread.h 00:09:30.962 TEST_HEADER include/spdk/trace.h 00:09:30.962 TEST_HEADER include/spdk/trace_parser.h 00:09:30.962 TEST_HEADER include/spdk/tree.h 00:09:30.962 TEST_HEADER include/spdk/ublk.h 00:09:30.962 CC test/env/mem_callbacks/mem_callbacks.o 00:09:30.962 TEST_HEADER include/spdk/util.h 00:09:30.962 TEST_HEADER include/spdk/uuid.h 00:09:30.962 TEST_HEADER include/spdk/version.h 00:09:30.962 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:30.962 LINK verify 00:09:30.962 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:30.962 TEST_HEADER include/spdk/vhost.h 00:09:30.962 TEST_HEADER include/spdk/vmd.h 00:09:30.962 TEST_HEADER include/spdk/xor.h 00:09:30.962 TEST_HEADER include/spdk/zipf.h 00:09:30.962 CXX test/cpp_headers/accel.o 00:09:30.962 LINK bdev_svc 00:09:30.962 LINK event_perf 00:09:30.962 LINK vtophys 00:09:31.220 CXX test/cpp_headers/accel_module.o 00:09:31.220 LINK thread 00:09:31.220 CXX test/cpp_headers/assert.o 00:09:31.220 CC test/event/reactor/reactor.o 00:09:31.220 LINK test_dma 00:09:31.478 CC test/event/reactor_perf/reactor_perf.o 00:09:31.478 LINK nvme_fuzz 00:09:31.478 CC test/event/app_repeat/app_repeat.o 00:09:31.478 CXX test/cpp_headers/barrier.o 00:09:31.478 LINK reactor 00:09:31.478 LINK reactor_perf 00:09:31.478 CC test/event/scheduler/scheduler.o 00:09:31.736 LINK app_repeat 00:09:31.736 CC examples/sock/hello_world/hello_sock.o 00:09:31.736 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:31.736 CXX test/cpp_headers/base64.o 00:09:31.995 LINK scheduler 00:09:31.995 CC examples/vmd/lsvmd/lsvmd.o 00:09:31.995 CC examples/vmd/led/led.o 00:09:31.995 CXX test/cpp_headers/bdev.o 00:09:31.995 LINK hello_sock 00:09:31.995 CC examples/idxd/perf/perf.o 00:09:31.995 LINK spdk_nvme_perf 00:09:31.995 LINK mem_callbacks 00:09:31.995 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:32.252 LINK led 00:09:32.252 CXX test/cpp_headers/bdev_module.o 00:09:32.252 LINK lsvmd 00:09:32.252 CXX test/cpp_headers/bdev_zone.o 00:09:32.252 CC app/spdk_nvme_identify/identify.o 00:09:32.510 CC test/app/histogram_perf/histogram_perf.o 00:09:32.510 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:32.510 CC test/app/jsoncat/jsoncat.o 00:09:32.510 LINK hello_fsdev 00:09:32.510 CC test/env/memory/memory_ut.o 00:09:32.510 CC test/env/pci/pci_ut.o 00:09:32.510 LINK histogram_perf 00:09:32.510 CXX test/cpp_headers/bit_array.o 00:09:32.510 LINK jsoncat 00:09:32.510 LINK env_dpdk_post_init 00:09:32.768 CXX test/cpp_headers/bit_pool.o 00:09:32.768 LINK idxd_perf 00:09:33.026 CC test/app/stub/stub.o 00:09:33.026 CXX test/cpp_headers/blob_bdev.o 00:09:33.026 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:33.026 CC examples/accel/perf/accel_perf.o 
00:09:33.284 LINK stub 00:09:33.284 CXX test/cpp_headers/blobfs_bdev.o 00:09:33.284 CC examples/blob/cli/blobcli.o 00:09:33.284 CC examples/blob/hello_world/hello_blob.o 00:09:33.284 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:33.284 CXX test/cpp_headers/blobfs.o 00:09:33.542 LINK pci_ut 00:09:33.542 LINK spdk_nvme_identify 00:09:33.542 CXX test/cpp_headers/blob.o 00:09:33.542 CC examples/nvme/hello_world/hello_world.o 00:09:33.801 LINK hello_blob 00:09:33.801 CC app/spdk_nvme_discover/discovery_aer.o 00:09:33.801 CXX test/cpp_headers/conf.o 00:09:33.801 CC examples/nvme/reconnect/reconnect.o 00:09:33.801 LINK vhost_fuzz 00:09:33.801 LINK hello_world 00:09:33.801 LINK accel_perf 00:09:34.059 CXX test/cpp_headers/config.o 00:09:34.059 LINK memory_ut 00:09:34.059 CXX test/cpp_headers/cpuset.o 00:09:34.059 LINK spdk_nvme_discover 00:09:34.059 CC test/rpc_client/rpc_client_test.o 00:09:34.059 CXX test/cpp_headers/crc16.o 00:09:34.059 LINK blobcli 00:09:34.059 CXX test/cpp_headers/crc32.o 00:09:34.059 CXX test/cpp_headers/crc64.o 00:09:34.317 LINK iscsi_fuzz 00:09:34.317 LINK rpc_client_test 00:09:34.317 CXX test/cpp_headers/dif.o 00:09:34.317 LINK reconnect 00:09:34.317 CC app/spdk_top/spdk_top.o 00:09:34.577 CC test/blobfs/mkfs/mkfs.o 00:09:34.577 CC test/nvme/aer/aer.o 00:09:34.577 CXX test/cpp_headers/dma.o 00:09:34.577 CC test/accel/dif/dif.o 00:09:34.577 CXX test/cpp_headers/endian.o 00:09:34.577 CXX test/cpp_headers/env_dpdk.o 00:09:34.577 CC test/lvol/esnap/esnap.o 00:09:34.577 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:34.577 CC examples/bdev/hello_world/hello_bdev.o 00:09:34.577 LINK mkfs 00:09:34.835 CXX test/cpp_headers/env.o 00:09:34.835 CC examples/nvme/arbitration/arbitration.o 00:09:34.835 CC examples/nvme/hotplug/hotplug.o 00:09:34.835 LINK aer 00:09:34.835 CXX test/cpp_headers/event.o 00:09:34.835 LINK hello_bdev 00:09:35.094 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:35.094 LINK hotplug 00:09:35.094 CC test/nvme/reset/reset.o 00:09:35.094 CXX test/cpp_headers/fd_group.o 00:09:35.094 LINK arbitration 00:09:35.094 LINK cmb_copy 00:09:35.351 CC examples/bdev/bdevperf/bdevperf.o 00:09:35.351 LINK nvme_manage 00:09:35.351 CXX test/cpp_headers/fd.o 00:09:35.351 CC examples/nvme/abort/abort.o 00:09:35.351 CXX test/cpp_headers/file.o 00:09:35.351 LINK dif 00:09:35.351 LINK reset 00:09:35.651 LINK spdk_top 00:09:35.651 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:35.651 CXX test/cpp_headers/fsdev.o 00:09:35.651 CC app/vhost/vhost.o 00:09:35.651 CC app/spdk_dd/spdk_dd.o 00:09:35.651 CC test/nvme/sgl/sgl.o 00:09:35.651 LINK pmr_persistence 00:09:35.651 CC test/nvme/e2edp/nvme_dp.o 00:09:35.651 CXX test/cpp_headers/fsdev_module.o 00:09:35.909 LINK abort 00:09:35.909 LINK vhost 00:09:35.909 CC test/bdev/bdevio/bdevio.o 00:09:35.909 CXX test/cpp_headers/ftl.o 00:09:35.909 CXX test/cpp_headers/fuse_dispatcher.o 00:09:35.909 CC test/nvme/overhead/overhead.o 00:09:36.168 CXX test/cpp_headers/gpt_spec.o 00:09:36.168 LINK sgl 00:09:36.168 LINK nvme_dp 00:09:36.168 LINK spdk_dd 00:09:36.168 CXX test/cpp_headers/hexlify.o 00:09:36.426 CC test/nvme/err_injection/err_injection.o 00:09:36.426 LINK overhead 00:09:36.426 LINK bdevperf 00:09:36.426 CC test/nvme/startup/startup.o 00:09:36.426 CC app/fio/nvme/fio_plugin.o 00:09:36.426 CC test/nvme/reserve/reserve.o 00:09:36.426 CXX test/cpp_headers/histogram_data.o 00:09:36.426 LINK bdevio 00:09:36.426 CC test/nvme/simple_copy/simple_copy.o 00:09:36.684 LINK err_injection 00:09:36.684 LINK startup 00:09:36.684 CXX 
test/cpp_headers/idxd.o 00:09:36.684 CC test/nvme/connect_stress/connect_stress.o 00:09:36.684 LINK reserve 00:09:36.942 CC test/nvme/boot_partition/boot_partition.o 00:09:36.942 LINK simple_copy 00:09:36.942 CC examples/nvmf/nvmf/nvmf.o 00:09:36.942 CC test/nvme/compliance/nvme_compliance.o 00:09:36.942 CC test/nvme/fused_ordering/fused_ordering.o 00:09:36.942 CXX test/cpp_headers/idxd_spec.o 00:09:36.942 LINK connect_stress 00:09:36.942 CXX test/cpp_headers/init.o 00:09:36.942 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:36.942 LINK boot_partition 00:09:36.942 CXX test/cpp_headers/ioat.o 00:09:37.199 LINK fused_ordering 00:09:37.199 CXX test/cpp_headers/ioat_spec.o 00:09:37.199 LINK spdk_nvme 00:09:37.199 CC app/fio/bdev/fio_plugin.o 00:09:37.199 LINK nvmf 00:09:37.199 LINK doorbell_aers 00:09:37.199 CXX test/cpp_headers/iscsi_spec.o 00:09:37.199 CXX test/cpp_headers/json.o 00:09:37.199 CC test/nvme/fdp/fdp.o 00:09:37.199 LINK nvme_compliance 00:09:37.458 CXX test/cpp_headers/jsonrpc.o 00:09:37.458 CC test/nvme/cuse/cuse.o 00:09:37.458 CXX test/cpp_headers/keyring.o 00:09:37.458 CXX test/cpp_headers/keyring_module.o 00:09:37.458 CXX test/cpp_headers/likely.o 00:09:37.458 CXX test/cpp_headers/log.o 00:09:37.458 CXX test/cpp_headers/lvol.o 00:09:37.458 CXX test/cpp_headers/md5.o 00:09:37.716 CXX test/cpp_headers/memory.o 00:09:37.716 CXX test/cpp_headers/mmio.o 00:09:37.716 CXX test/cpp_headers/nbd.o 00:09:37.716 CXX test/cpp_headers/net.o 00:09:37.716 CXX test/cpp_headers/notify.o 00:09:37.716 CXX test/cpp_headers/nvme.o 00:09:37.716 LINK fdp 00:09:37.716 CXX test/cpp_headers/nvme_intel.o 00:09:37.716 CXX test/cpp_headers/nvme_ocssd.o 00:09:37.974 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:37.974 LINK spdk_bdev 00:09:37.974 CXX test/cpp_headers/nvme_spec.o 00:09:37.974 CXX test/cpp_headers/nvme_zns.o 00:09:37.974 CXX test/cpp_headers/nvmf_cmd.o 00:09:37.974 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:37.974 CXX test/cpp_headers/nvmf.o 00:09:37.974 CXX test/cpp_headers/nvmf_spec.o 00:09:37.974 CXX test/cpp_headers/nvmf_transport.o 00:09:37.974 CXX test/cpp_headers/opal.o 00:09:37.974 CXX test/cpp_headers/opal_spec.o 00:09:38.233 CXX test/cpp_headers/pci_ids.o 00:09:38.233 CXX test/cpp_headers/pipe.o 00:09:38.233 CXX test/cpp_headers/queue.o 00:09:38.233 CXX test/cpp_headers/reduce.o 00:09:38.233 CXX test/cpp_headers/rpc.o 00:09:38.233 CXX test/cpp_headers/scheduler.o 00:09:38.233 CXX test/cpp_headers/scsi.o 00:09:38.233 CXX test/cpp_headers/scsi_spec.o 00:09:38.233 CXX test/cpp_headers/sock.o 00:09:38.492 CXX test/cpp_headers/stdinc.o 00:09:38.492 CXX test/cpp_headers/string.o 00:09:38.492 CXX test/cpp_headers/thread.o 00:09:38.492 CXX test/cpp_headers/trace.o 00:09:38.492 CXX test/cpp_headers/trace_parser.o 00:09:38.492 CXX test/cpp_headers/tree.o 00:09:38.492 CXX test/cpp_headers/ublk.o 00:09:38.492 CXX test/cpp_headers/util.o 00:09:38.492 CXX test/cpp_headers/uuid.o 00:09:38.492 CXX test/cpp_headers/version.o 00:09:38.492 CXX test/cpp_headers/vfio_user_pci.o 00:09:38.492 CXX test/cpp_headers/vfio_user_spec.o 00:09:38.492 CXX test/cpp_headers/vhost.o 00:09:38.492 CXX test/cpp_headers/vmd.o 00:09:38.750 CXX test/cpp_headers/xor.o 00:09:38.750 CXX test/cpp_headers/zipf.o 00:09:39.009 LINK cuse 00:09:42.313 LINK esnap 00:09:42.313 00:09:42.313 real 2m15.989s 00:09:42.313 user 13m30.523s 00:09:42.313 sys 2m10.304s 00:09:42.313 13:29:34 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:42.313 13:29:34 make -- common/autotest_common.sh@10 -- $ set +x 00:09:42.313 
************************************ 00:09:42.313 END TEST make 00:09:42.313 ************************************ 00:09:42.313 13:29:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:42.313 13:29:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:42.313 13:29:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:42.313 13:29:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:42.572 13:29:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:42.572 13:29:34 -- pm/common@44 -- $ pid=5336 00:09:42.572 13:29:34 -- pm/common@50 -- $ kill -TERM 5336 00:09:42.572 13:29:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:42.572 13:29:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:42.572 13:29:34 -- pm/common@44 -- $ pid=5338 00:09:42.572 13:29:34 -- pm/common@50 -- $ kill -TERM 5338 00:09:42.572 13:29:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:42.572 13:29:34 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:42.572 13:29:34 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.572 13:29:34 -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.572 13:29:34 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.572 13:29:34 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.572 13:29:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.572 13:29:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.572 13:29:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.572 13:29:34 -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.572 13:29:34 -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.572 13:29:34 -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.572 13:29:34 -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.572 13:29:34 -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.572 13:29:34 -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.572 13:29:34 -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.572 13:29:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.572 13:29:34 -- scripts/common.sh@344 -- # case "$op" in 00:09:42.572 13:29:34 -- scripts/common.sh@345 -- # : 1 00:09:42.572 13:29:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.572 13:29:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.572 13:29:34 -- scripts/common.sh@365 -- # decimal 1 00:09:42.572 13:29:34 -- scripts/common.sh@353 -- # local d=1 00:09:42.572 13:29:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.572 13:29:34 -- scripts/common.sh@355 -- # echo 1 00:09:42.572 13:29:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.572 13:29:34 -- scripts/common.sh@366 -- # decimal 2 00:09:42.572 13:29:34 -- scripts/common.sh@353 -- # local d=2 00:09:42.572 13:29:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.572 13:29:34 -- scripts/common.sh@355 -- # echo 2 00:09:42.572 13:29:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.572 13:29:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.572 13:29:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.572 13:29:34 -- scripts/common.sh@368 -- # return 0 00:09:42.572 13:29:34 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.572 13:29:34 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.572 --rc genhtml_branch_coverage=1 00:09:42.572 --rc genhtml_function_coverage=1 00:09:42.572 --rc genhtml_legend=1 00:09:42.572 --rc geninfo_all_blocks=1 00:09:42.572 --rc geninfo_unexecuted_blocks=1 00:09:42.572 00:09:42.572 ' 00:09:42.572 13:29:34 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.572 --rc genhtml_branch_coverage=1 00:09:42.572 --rc genhtml_function_coverage=1 00:09:42.572 --rc genhtml_legend=1 00:09:42.572 --rc geninfo_all_blocks=1 00:09:42.572 --rc geninfo_unexecuted_blocks=1 00:09:42.572 00:09:42.572 ' 00:09:42.572 13:29:34 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.572 --rc genhtml_branch_coverage=1 00:09:42.572 --rc genhtml_function_coverage=1 00:09:42.572 --rc genhtml_legend=1 00:09:42.572 --rc geninfo_all_blocks=1 00:09:42.572 --rc geninfo_unexecuted_blocks=1 00:09:42.572 00:09:42.572 ' 00:09:42.572 13:29:34 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.572 --rc genhtml_branch_coverage=1 00:09:42.572 --rc genhtml_function_coverage=1 00:09:42.572 --rc genhtml_legend=1 00:09:42.572 --rc geninfo_all_blocks=1 00:09:42.572 --rc geninfo_unexecuted_blocks=1 00:09:42.572 00:09:42.572 ' 00:09:42.572 13:29:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:42.573 13:29:34 -- nvmf/common.sh@7 -- # uname -s 00:09:42.573 13:29:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.573 13:29:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.573 13:29:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.573 13:29:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.573 13:29:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.573 13:29:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.573 13:29:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.573 13:29:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.573 13:29:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.573 13:29:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.573 13:29:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8d44fa66-3027-4e9a-96e5-d14ae0262833 00:09:42.573 
13:29:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8d44fa66-3027-4e9a-96e5-d14ae0262833 00:09:42.573 13:29:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.573 13:29:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.573 13:29:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:42.573 13:29:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.573 13:29:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.573 13:29:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.573 13:29:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.573 13:29:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.573 13:29:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.573 13:29:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.573 13:29:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.573 13:29:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.573 13:29:34 -- paths/export.sh@5 -- # export PATH 00:09:42.573 13:29:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.573 13:29:34 -- nvmf/common.sh@51 -- # : 0 00:09:42.573 13:29:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.573 13:29:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.573 13:29:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.573 13:29:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.573 13:29:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.573 13:29:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.573 13:29:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.573 13:29:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.573 13:29:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.573 13:29:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:42.573 13:29:34 -- spdk/autotest.sh@32 -- # uname -s 00:09:42.573 13:29:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:42.573 13:29:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:42.573 13:29:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:42.573 13:29:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:42.573 13:29:34 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:42.573 13:29:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:42.831 13:29:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:42.831 13:29:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:42.831 13:29:34 -- spdk/autotest.sh@48 -- # udevadm_pid=55259 00:09:42.831 13:29:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:42.831 13:29:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:42.831 13:29:34 -- pm/common@17 -- # local monitor 00:09:42.831 13:29:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:42.831 13:29:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:42.831 13:29:34 -- pm/common@25 -- # sleep 1 00:09:42.831 13:29:34 -- pm/common@21 -- # date +%s 00:09:42.831 13:29:34 -- pm/common@21 -- # date +%s 00:09:42.831 13:29:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109374 00:09:42.831 13:29:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109374 00:09:42.831 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109374_collect-vmstat.pm.log 00:09:42.831 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109374_collect-cpu-load.pm.log 00:09:43.767 13:29:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:43.767 13:29:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:43.767 13:29:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.767 13:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:43.767 13:29:35 -- spdk/autotest.sh@59 -- # create_test_list 00:09:43.767 13:29:35 -- common/autotest_common.sh@752 -- # xtrace_disable 00:09:43.767 13:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:43.767 13:29:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:43.767 13:29:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:43.767 13:29:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:43.767 13:29:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:43.767 13:29:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:43.767 13:29:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:43.767 13:29:35 -- common/autotest_common.sh@1457 -- # uname 00:09:43.767 13:29:35 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:09:43.767 13:29:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:43.767 13:29:35 -- common/autotest_common.sh@1477 -- # uname 00:09:43.767 13:29:35 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:09:43.767 13:29:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:43.767 13:29:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:44.026 lcov: LCOV version 1.15 00:09:44.026 13:29:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:02.107 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:02.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:20.344 13:30:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:20.344 13:30:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.344 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:10:20.344 13:30:11 -- spdk/autotest.sh@78 -- # rm -f 00:10:20.344 13:30:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:20.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:20.344 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:20.344 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:20.344 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:10:20.344 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:10:20.344 13:30:12 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:20.344 13:30:12 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:20.344 13:30:12 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:20.344 13:30:12 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:10:20.344 13:30:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:20.344 13:30:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0c0n1 00:10:20.344 13:30:12 -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:10:20.344 13:30:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:10:20.344 13:30:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:20.344 13:30:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:20.344 13:30:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:10:20.344 13:30:12 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:20.344 13:30:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:20.344 13:30:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:20.344 13:30:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:20.344 13:30:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:10:20.344 13:30:12 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:20.344 13:30:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:20.345 13:30:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:10:20.345 13:30:12 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:20.345 13:30:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:20.345 13:30:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:10:20.345 13:30:12 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:10:20.345 13:30:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:20.345 
13:30:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:20.345 13:30:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 00:10:20.345 13:30:12 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:10:20.345 13:30:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:20.345 13:30:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 00:10:20.345 13:30:12 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:10:20.345 13:30:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:10:20.345 13:30:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:20.345 13:30:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:20.345 13:30:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:20.345 13:30:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:20.345 13:30:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:10:20.345 13:30:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:20.345 13:30:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:20.345 No valid GPT data, bailing 00:10:20.345 13:30:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:20.345 13:30:12 -- scripts/common.sh@394 -- # pt= 00:10:20.345 13:30:12 -- scripts/common.sh@395 -- # return 1 00:10:20.345 13:30:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:20.345 1+0 records in 00:10:20.345 1+0 records out 00:10:20.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420898 s, 249 MB/s 00:10:20.345 13:30:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:20.345 13:30:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:20.345 13:30:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:10:20.345 13:30:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:10:20.345 13:30:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:20.345 No valid GPT data, bailing 00:10:20.345 13:30:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:20.345 13:30:12 -- scripts/common.sh@394 -- # pt= 00:10:20.345 13:30:12 -- scripts/common.sh@395 -- # return 1 00:10:20.345 13:30:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:20.345 1+0 records in 00:10:20.345 1+0 records out 00:10:20.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00344325 s, 305 MB/s 00:10:20.345 13:30:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:20.345 13:30:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:20.345 13:30:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:10:20.345 13:30:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:10:20.345 13:30:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:10:20.345 No valid GPT data, bailing 00:10:20.345 13:30:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:10:20.345 13:30:12 -- scripts/common.sh@394 -- # pt= 00:10:20.345 13:30:12 -- scripts/common.sh@395 -- # return 1 00:10:20.345 13:30:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:10:20.345 1+0 
records in 00:10:20.345 1+0 records out 00:10:20.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136166 s, 77.0 MB/s 00:10:20.345 13:30:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:20.345 13:30:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:20.345 13:30:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:10:20.345 13:30:12 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:10:20.345 13:30:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:10:20.604 No valid GPT data, bailing 00:10:20.604 13:30:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:10:20.604 13:30:12 -- scripts/common.sh@394 -- # pt= 00:10:20.604 13:30:12 -- scripts/common.sh@395 -- # return 1 00:10:20.604 13:30:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:10:20.604 1+0 records in 00:10:20.604 1+0 records out 00:10:20.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372922 s, 281 MB/s 00:10:20.604 13:30:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:20.604 13:30:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:20.604 13:30:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:10:20.604 13:30:12 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:10:20.604 13:30:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:10:20.604 No valid GPT data, bailing 00:10:20.604 13:30:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:10:20.604 13:30:12 -- scripts/common.sh@394 -- # pt= 00:10:20.604 13:30:12 -- scripts/common.sh@395 -- # return 1 00:10:20.604 13:30:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:10:20.604 1+0 records in 00:10:20.604 1+0 records out 00:10:20.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495899 s, 211 MB/s 00:10:20.604 13:30:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:20.604 13:30:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:20.604 13:30:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:10:20.604 13:30:12 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:10:20.604 13:30:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:10:20.604 No valid GPT data, bailing 00:10:20.604 13:30:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:10:20.604 13:30:12 -- scripts/common.sh@394 -- # pt= 00:10:20.604 13:30:12 -- scripts/common.sh@395 -- # return 1 00:10:20.604 13:30:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:10:20.604 1+0 records in 00:10:20.604 1+0 records out 00:10:20.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419409 s, 250 MB/s 00:10:20.604 13:30:12 -- spdk/autotest.sh@105 -- # sync 00:10:20.863 13:30:12 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:20.863 13:30:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:20.863 13:30:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:22.764 13:30:14 -- spdk/autotest.sh@111 -- # uname -s 00:10:22.764 13:30:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:10:22.764 13:30:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:10:22.764 13:30:14 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:23.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.590 
Hugepages
00:10:23.590 node hugesize free / total
00:10:23.590 node0 1048576kB 0 / 0
00:10:23.590 node0 2048kB 0 / 0
00:10:23.590
00:10:23.590 Type BDF Vendor Device NUMA Driver Device Block devices
00:10:23.848 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:10:23.848 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:10:23.848 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:10:24.114 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:10:24.114 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:10:24.114 13:30:15 -- spdk/autotest.sh@117 -- # uname -s
00:10:24.114 13:30:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:10:24.114 13:30:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:10:24.114 13:30:15 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:24.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:25.249 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.249 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.249 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.249 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.249 13:30:17 -- common/autotest_common.sh@1517 -- # sleep 1
00:10:26.184 13:30:18 -- common/autotest_common.sh@1518 -- # bdfs=()
00:10:26.184 13:30:18 -- common/autotest_common.sh@1518 -- # local bdfs
00:10:26.184 13:30:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:10:26.184 13:30:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:10:26.184 13:30:18 -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:26.184 13:30:18 -- common/autotest_common.sh@1498 -- # local bdfs
00:10:26.184 13:30:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:26.184 13:30:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:26.442 13:30:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:26.442 13:30:18 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:10:26.442 13:30:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:26.442 13:30:18 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:26.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:27.025 Waiting for block devices as requested
00:10:27.025 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:27.025 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:27.025 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:27.025 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:32.297 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:32.297 13:30:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:10:32.297 13:30:24 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:10:32.297 13:30:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:32.297 13:30:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1543 -- # continue 00:10:32.297 13:30:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:32.297 13:30:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1543 -- # continue 00:10:32.297 13:30:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:32.297 13:30:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1543 -- # continue 00:10:32.297 13:30:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:10:32.297 13:30:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:32.297 13:30:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:32.297 13:30:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:32.297 13:30:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:10:32.297 13:30:24 -- common/autotest_common.sh@1543 -- # continue 00:10:32.297 13:30:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:32.297 13:30:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.297 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:10:32.297 13:30:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:32.297 13:30:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.297 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:10:32.297 13:30:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:32.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:33.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.435 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.695 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.695 13:30:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:33.695 13:30:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.695 13:30:25 -- common/autotest_common.sh@10 -- # set +x 00:10:33.695 13:30:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:33.695 13:30:25 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:10:33.695 13:30:25 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:10:33.695 13:30:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:10:33.695 13:30:25 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:10:33.695 13:30:25 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:10:33.695 13:30:25 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:10:33.695 13:30:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:10:33.695 13:30:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:33.695 13:30:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:33.695 13:30:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:33.695 13:30:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:33.695 13:30:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:33.695 13:30:25 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:33.695 13:30:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:33.695 13:30:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:33.695 13:30:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:33.695 13:30:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:33.695 13:30:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:33.695 13:30:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:33.695 13:30:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:10:33.695 13:30:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:10:33.695 13:30:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:33.695 13:30:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:33.695 13:30:25 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:10:33.695 13:30:25 -- common/autotest_common.sh@1572 -- # return 0 00:10:33.695 13:30:25 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:10:33.695 13:30:25 -- common/autotest_common.sh@1580 -- # return 0 00:10:33.695 13:30:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:33.695 13:30:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:33.695 13:30:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:33.695 13:30:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:33.695 13:30:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:33.695 13:30:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.695 13:30:25 -- common/autotest_common.sh@10 -- # set +x 00:10:33.695 13:30:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:33.695 13:30:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:33.695 13:30:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:33.695 13:30:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.695 13:30:25 -- common/autotest_common.sh@10 -- # set +x 00:10:33.695 ************************************ 00:10:33.695 START TEST env 00:10:33.695 ************************************ 00:10:33.695 13:30:25 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:33.954 * Looking for test storage... 00:10:33.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.955 13:30:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.955 13:30:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.955 13:30:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.955 13:30:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.955 13:30:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.955 13:30:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.955 13:30:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.955 13:30:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.955 13:30:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.955 13:30:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.955 13:30:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.955 13:30:25 env -- scripts/common.sh@344 -- # case "$op" in 00:10:33.955 13:30:25 env -- scripts/common.sh@345 -- # : 1 00:10:33.955 13:30:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.955 13:30:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.955 13:30:25 env -- scripts/common.sh@365 -- # decimal 1 00:10:33.955 13:30:25 env -- scripts/common.sh@353 -- # local d=1 00:10:33.955 13:30:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.955 13:30:25 env -- scripts/common.sh@355 -- # echo 1 00:10:33.955 13:30:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.955 13:30:25 env -- scripts/common.sh@366 -- # decimal 2 00:10:33.955 13:30:25 env -- scripts/common.sh@353 -- # local d=2 00:10:33.955 13:30:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.955 13:30:25 env -- scripts/common.sh@355 -- # echo 2 00:10:33.955 13:30:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.955 13:30:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.955 13:30:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.955 13:30:25 env -- scripts/common.sh@368 -- # return 0 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.955 --rc genhtml_branch_coverage=1 00:10:33.955 --rc genhtml_function_coverage=1 00:10:33.955 --rc genhtml_legend=1 00:10:33.955 --rc geninfo_all_blocks=1 00:10:33.955 --rc geninfo_unexecuted_blocks=1 00:10:33.955 00:10:33.955 ' 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.955 --rc genhtml_branch_coverage=1 00:10:33.955 --rc genhtml_function_coverage=1 00:10:33.955 --rc genhtml_legend=1 00:10:33.955 --rc geninfo_all_blocks=1 00:10:33.955 --rc geninfo_unexecuted_blocks=1 00:10:33.955 00:10:33.955 ' 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.955 --rc genhtml_branch_coverage=1 00:10:33.955 --rc genhtml_function_coverage=1 00:10:33.955 --rc genhtml_legend=1 00:10:33.955 --rc geninfo_all_blocks=1 00:10:33.955 --rc geninfo_unexecuted_blocks=1 00:10:33.955 00:10:33.955 ' 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.955 --rc genhtml_branch_coverage=1 00:10:33.955 --rc genhtml_function_coverage=1 00:10:33.955 --rc genhtml_legend=1 00:10:33.955 --rc geninfo_all_blocks=1 00:10:33.955 --rc geninfo_unexecuted_blocks=1 00:10:33.955 00:10:33.955 ' 00:10:33.955 13:30:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:33.955 13:30:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.955 13:30:25 env -- common/autotest_common.sh@10 -- # set +x 00:10:33.955 ************************************ 00:10:33.955 START TEST env_memory 00:10:33.955 ************************************ 00:10:33.955 13:30:25 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:33.955 00:10:33.955 00:10:33.955 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.955 http://cunit.sourceforge.net/ 00:10:33.955 00:10:33.955 00:10:33.955 Suite: memory 00:10:33.955 Test: alloc and free memory map ...[2024-11-20 13:30:25.958506] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:10:34.213 passed
00:10:34.213 Test: mem map translation ...[2024-11-20 13:30:26.028231] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:10:34.213 [2024-11-20 13:30:26.028306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:10:34.213 [2024-11-20 13:30:26.028406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:10:34.213 [2024-11-20 13:30:26.028449] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:10:34.213 passed
00:10:34.213 Test: mem map registration ...[2024-11-20 13:30:26.126543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:10:34.213 [2024-11-20 13:30:26.126639] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:10:34.213 passed
00:10:34.472 Test: mem map adjacent registrations ...passed
00:10:34.472
00:10:34.472 Run Summary: Type Total Ran Passed Failed Inactive
00:10:34.472 suites 1 1 n/a 0 0
00:10:34.472 tests 4 4 4 0 0
00:10:34.472 asserts 152 152 152 0 n/a
00:10:34.472
00:10:34.472 Elapsed time = 0.357 seconds
00:10:34.472
00:10:34.472 real 0m0.398s
00:10:34.472 user 0m0.365s
00:10:34.472 sys 0m0.025s
00:10:34.472 ************************************
00:10:34.472 END TEST env_memory
00:10:34.472 ************************************
00:10:34.472 13:30:26 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.472 13:30:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:10:34.472 13:30:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:10:34.472 13:30:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:34.472 13:30:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:34.472 13:30:26 env -- common/autotest_common.sh@10 -- # set +x
00:10:34.472 ************************************
00:10:34.472 START TEST env_vtophys
00:10:34.472 ************************************
00:10:34.472 13:30:26 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:10:34.472 EAL: lib.eal log level changed from notice to debug
00:10:34.472 EAL: Detected lcore 0 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 1 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 2 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 3 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 4 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 5 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 6 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 7 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 8 as core 0 on socket 0
00:10:34.472 EAL: Detected lcore 9 as core 0 on socket 0
00:10:34.472 EAL: Maximum logical cores by configuration: 128
00:10:34.472 EAL: Detected CPU lcores: 10
00:10:34.472 EAL: Detected NUMA nodes: 1
00:10:34.472 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:10:34.472 EAL: Detected shared linkage of DPDK
00:10:34.472 EAL: No
shared files mode enabled, IPC will be disabled 00:10:34.472 EAL: Selected IOVA mode 'PA' 00:10:34.472 EAL: Probing VFIO support... 00:10:34.472 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:34.472 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:34.472 EAL: Ask a virtual area of 0x2e000 bytes 00:10:34.472 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:34.472 EAL: Setting up physically contiguous memory... 00:10:34.472 EAL: Setting maximum number of open files to 524288 00:10:34.472 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:34.472 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:34.472 EAL: Ask a virtual area of 0x61000 bytes 00:10:34.472 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:34.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:34.472 EAL: Ask a virtual area of 0x400000000 bytes 00:10:34.472 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:34.472 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:34.472 EAL: Ask a virtual area of 0x61000 bytes 00:10:34.472 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:34.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:34.472 EAL: Ask a virtual area of 0x400000000 bytes 00:10:34.472 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:34.472 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:34.472 EAL: Ask a virtual area of 0x61000 bytes 00:10:34.472 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:34.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:34.472 EAL: Ask a virtual area of 0x400000000 bytes 00:10:34.472 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:34.472 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:34.472 EAL: Ask a virtual area of 0x61000 bytes 00:10:34.472 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:34.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:34.472 EAL: Ask a virtual area of 0x400000000 bytes 00:10:34.472 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:34.472 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:34.472 EAL: Hugepages will be freed exactly as allocated. 00:10:34.473 EAL: No shared files mode enabled, IPC is disabled 00:10:34.473 EAL: No shared files mode enabled, IPC is disabled 00:10:34.732 EAL: TSC frequency is ~2200000 KHz 00:10:34.732 EAL: Main lcore 0 is ready (tid=7fa740106a40;cpuset=[0]) 00:10:34.732 EAL: Trying to obtain current memory policy. 00:10:34.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:34.732 EAL: Restoring previous memory policy: 0 00:10:34.732 EAL: request: mp_malloc_sync 00:10:34.732 EAL: No shared files mode enabled, IPC is disabled 00:10:34.732 EAL: Heap on socket 0 was expanded by 2MB 00:10:34.732 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:34.732 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:34.732 EAL: Mem event callback 'spdk:(nil)' registered 00:10:34.732 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:10:34.732 00:10:34.732 00:10:34.732 CUnit - A unit testing framework for C - Version 2.1-3 00:10:34.732 http://cunit.sourceforge.net/ 00:10:34.732 00:10:34.732 00:10:34.732 Suite: components_suite 00:10:34.992 Test: vtophys_malloc_test ...passed 00:10:34.992 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:34.992 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:34.992 EAL: Restoring previous memory policy: 4 00:10:34.992 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.992 EAL: request: mp_malloc_sync 00:10:34.992 EAL: No shared files mode enabled, IPC is disabled 00:10:34.992 EAL: Heap on socket 0 was expanded by 4MB 00:10:34.992 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.992 EAL: request: mp_malloc_sync 00:10:34.992 EAL: No shared files mode enabled, IPC is disabled 00:10:34.992 EAL: Heap on socket 0 was shrunk by 4MB 00:10:34.992 EAL: Trying to obtain current memory policy. 00:10:34.992 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:34.992 EAL: Restoring previous memory policy: 4 00:10:34.992 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.992 EAL: request: mp_malloc_sync 00:10:34.992 EAL: No shared files mode enabled, IPC is disabled 00:10:34.992 EAL: Heap on socket 0 was expanded by 6MB 00:10:34.992 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.992 EAL: request: mp_malloc_sync 00:10:34.992 EAL: No shared files mode enabled, IPC is disabled 00:10:34.992 EAL: Heap on socket 0 was shrunk by 6MB 00:10:34.992 EAL: Trying to obtain current memory policy. 00:10:34.992 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:34.992 EAL: Restoring previous memory policy: 4 00:10:34.992 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.992 EAL: request: mp_malloc_sync 00:10:34.992 EAL: No shared files mode enabled, IPC is disabled 00:10:34.992 EAL: Heap on socket 0 was expanded by 10MB 00:10:34.992 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.992 EAL: request: mp_malloc_sync 00:10:34.992 EAL: No shared files mode enabled, IPC is disabled 00:10:34.992 EAL: Heap on socket 0 was shrunk by 10MB 00:10:34.992 EAL: Trying to obtain current memory policy. 00:10:34.992 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:34.992 EAL: Restoring previous memory policy: 4 00:10:34.992 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.992 EAL: request: mp_malloc_sync 00:10:34.992 EAL: No shared files mode enabled, IPC is disabled 00:10:34.992 EAL: Heap on socket 0 was expanded by 18MB 00:10:35.251 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.251 EAL: request: mp_malloc_sync 00:10:35.251 EAL: No shared files mode enabled, IPC is disabled 00:10:35.251 EAL: Heap on socket 0 was shrunk by 18MB 00:10:35.251 EAL: Trying to obtain current memory policy. 00:10:35.251 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:35.251 EAL: Restoring previous memory policy: 4 00:10:35.251 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.251 EAL: request: mp_malloc_sync 00:10:35.251 EAL: No shared files mode enabled, IPC is disabled 00:10:35.251 EAL: Heap on socket 0 was expanded by 34MB 00:10:35.251 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.251 EAL: request: mp_malloc_sync 00:10:35.251 EAL: No shared files mode enabled, IPC is disabled 00:10:35.251 EAL: Heap on socket 0 was shrunk by 34MB 00:10:35.251 EAL: Trying to obtain current memory policy. 
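The env_memory failures earlier (the "invalid spdk_mem_map_set_translation parameters" lines) come from feeding misaligned values into the public mem-map API, which tracks translations at 2 MB granularity; the "invalid usermode virtual address 281474976710656" case is 2^48, just past the user address range the map covers. A minimal C sketch of those calls, assuming only spdk/env.h — the notify callback and translation values here are illustrative stand-ins, not the unit test's own:

#include "spdk/env.h"

static int
sketch_notify(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
        return 0; /* accept every 2 MB region handed to this map */
}

static const struct spdk_mem_map_ops sketch_ops = {
        .notify_cb = sketch_notify,
        .are_contiguous = NULL,
};

void
mem_map_sketch(void)
{
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &sketch_ops, NULL);

        /* Fails: len 1234 is not a multiple of the 2 MB granularity */
        spdk_mem_map_set_translation(map, 0x200000, 1234, 0);
        /* Fails: vaddr 1234 is not 2 MB aligned */
        spdk_mem_map_set_translation(map, 1234, 0x200000, 0);
        /* Succeeds: aligned vaddr and len */
        spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0xabcd);

        spdk_mem_map_free(&map);
}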
00:10:35.251 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:35.251 EAL: Restoring previous memory policy: 4 00:10:35.251 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.251 EAL: request: mp_malloc_sync 00:10:35.251 EAL: No shared files mode enabled, IPC is disabled 00:10:35.251 EAL: Heap on socket 0 was expanded by 66MB 00:10:35.251 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.552 EAL: request: mp_malloc_sync 00:10:35.552 EAL: No shared files mode enabled, IPC is disabled 00:10:35.552 EAL: Heap on socket 0 was shrunk by 66MB 00:10:35.552 EAL: Trying to obtain current memory policy. 00:10:35.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:35.552 EAL: Restoring previous memory policy: 4 00:10:35.552 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.552 EAL: request: mp_malloc_sync 00:10:35.552 EAL: No shared files mode enabled, IPC is disabled 00:10:35.552 EAL: Heap on socket 0 was expanded by 130MB 00:10:35.811 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.811 EAL: request: mp_malloc_sync 00:10:35.811 EAL: No shared files mode enabled, IPC is disabled 00:10:35.811 EAL: Heap on socket 0 was shrunk by 130MB 00:10:35.811 EAL: Trying to obtain current memory policy. 00:10:35.811 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:35.811 EAL: Restoring previous memory policy: 4 00:10:35.811 EAL: Calling mem event callback 'spdk:(nil)' 00:10:35.811 EAL: request: mp_malloc_sync 00:10:35.811 EAL: No shared files mode enabled, IPC is disabled 00:10:35.811 EAL: Heap on socket 0 was expanded by 258MB 00:10:36.378 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.378 EAL: request: mp_malloc_sync 00:10:36.378 EAL: No shared files mode enabled, IPC is disabled 00:10:36.378 EAL: Heap on socket 0 was shrunk by 258MB 00:10:36.637 EAL: Trying to obtain current memory policy. 00:10:36.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.895 EAL: Restoring previous memory policy: 4 00:10:36.895 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.895 EAL: request: mp_malloc_sync 00:10:36.895 EAL: No shared files mode enabled, IPC is disabled 00:10:36.895 EAL: Heap on socket 0 was expanded by 514MB 00:10:37.461 EAL: Calling mem event callback 'spdk:(nil)' 00:10:37.719 EAL: request: mp_malloc_sync 00:10:37.719 EAL: No shared files mode enabled, IPC is disabled 00:10:37.719 EAL: Heap on socket 0 was shrunk by 514MB 00:10:38.287 EAL: Trying to obtain current memory policy. 
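Every "Heap on socket 0 was expanded/shrunk by N MB" pair above is EAL growing or shrinking its malloc heap and firing registered mem event callbacks; the "Calling mem event callback 'spdk:(nil)'" lines are SPDK's own callback keeping its vtophys maps in sync. A hedged sketch of registering such a callback with DPDK (the name and printf body are examples, not SPDK's internals):

#include <stdio.h>
#include <rte_memory.h>

static void
sketch_mem_event_cb(enum rte_mem_event event, const void *addr, size_t len,
                    void *arg)
{
        printf("heap %s: addr=%p len=%zu\n",
               event == RTE_MEM_EVENT_ALLOC ? "expanded" : "shrunk",
               addr, len);
}

int
sketch_register_cb(void)
{
        /* EAL invokes the callback on every subsequent heap grow/shrink */
        return rte_mem_event_callback_register("sketch", sketch_mem_event_cb,
                                               NULL);
}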
00:10:38.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:38.545 EAL: Restoring previous memory policy: 4 00:10:38.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:38.545 EAL: request: mp_malloc_sync 00:10:38.545 EAL: No shared files mode enabled, IPC is disabled 00:10:38.545 EAL: Heap on socket 0 was expanded by 1026MB 00:10:40.450 EAL: Calling mem event callback 'spdk:(nil)' 00:10:40.450 EAL: request: mp_malloc_sync 00:10:40.450 EAL: No shared files mode enabled, IPC is disabled 00:10:40.450 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:41.825 passed 00:10:41.825 00:10:41.825 Run Summary: Type Total Ran Passed Failed Inactive 00:10:41.825 suites 1 1 n/a 0 0 00:10:41.825 tests 2 2 2 0 0 00:10:41.825 asserts 5565 5565 5565 0 n/a 00:10:41.825 00:10:41.825 Elapsed time = 6.909 seconds 00:10:41.825 EAL: Calling mem event callback 'spdk:(nil)' 00:10:41.825 EAL: request: mp_malloc_sync 00:10:41.825 EAL: No shared files mode enabled, IPC is disabled 00:10:41.825 EAL: Heap on socket 0 was shrunk by 2MB 00:10:41.825 EAL: No shared files mode enabled, IPC is disabled 00:10:41.825 EAL: No shared files mode enabled, IPC is disabled 00:10:41.825 EAL: No shared files mode enabled, IPC is disabled 00:10:41.825 ************************************ 00:10:41.825 END TEST env_vtophys 00:10:41.825 ************************************ 00:10:41.825 00:10:41.825 real 0m7.261s 00:10:41.825 user 0m6.366s 00:10:41.825 sys 0m0.718s 00:10:41.825 13:30:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.825 13:30:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:41.825 13:30:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:41.825 13:30:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.825 13:30:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.825 13:30:33 env -- common/autotest_common.sh@10 -- # set +x 00:10:41.825 ************************************ 00:10:41.825 START TEST env_pci 00:10:41.825 ************************************ 00:10:41.826 13:30:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:41.826 00:10:41.826 00:10:41.826 CUnit - A unit testing framework for C - Version 2.1-3 00:10:41.826 http://cunit.sourceforge.net/ 00:10:41.826 00:10:41.826 00:10:41.826 Suite: pci 00:10:41.826 Test: pci_hook ...[2024-11-20 13:30:33.673324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58111 has claimed it 00:10:41.826 passed 00:10:41.826 00:10:41.826 Run Summary: Type Total Ran Passed Failed Inactive 00:10:41.826 suites 1 1 n/a 0 0 00:10:41.826 tests 1 1 1 0 0 00:10:41.826 asserts 25 25 25 0 n/a 00:10:41.826 00:10:41.826 Elapsed time = 0.008 seconds 00:10:41.826 EAL: Cannot find device (10000:00:01.0) 00:10:41.826 EAL: Failed to attach device on primary process 00:10:41.826 ************************************ 00:10:41.826 END TEST env_pci 00:10:41.826 ************************************ 00:10:41.826 00:10:41.826 real 0m0.080s 00:10:41.826 user 0m0.043s 00:10:41.826 sys 0m0.036s 00:10:41.826 13:30:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.826 13:30:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:41.826 13:30:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:41.826 13:30:33 env -- env/env.sh@15 -- # uname 00:10:41.826 13:30:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:41.826 13:30:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:41.826 13:30:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:41.826 13:30:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:41.826 13:30:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.826 13:30:33 env -- common/autotest_common.sh@10 -- # set +x 00:10:41.826 ************************************ 00:10:41.826 START TEST env_dpdk_post_init 00:10:41.826 ************************************ 00:10:41.826 13:30:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:41.826 EAL: Detected CPU lcores: 10 00:10:41.826 EAL: Detected NUMA nodes: 1 00:10:41.826 EAL: Detected shared linkage of DPDK 00:10:42.085 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:42.085 EAL: Selected IOVA mode 'PA' 00:10:42.085 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:42.085 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:42.085 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:42.085 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:10:42.085 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:10:42.085 Starting DPDK initialization... 00:10:42.085 Starting SPDK post initialization... 00:10:42.085 SPDK NVMe probe 00:10:42.085 Attaching to 0000:00:10.0 00:10:42.085 Attaching to 0000:00:11.0 00:10:42.085 Attaching to 0000:00:12.0 00:10:42.085 Attaching to 0000:00:13.0 00:10:42.085 Attached to 0000:00:10.0 00:10:42.085 Attached to 0000:00:11.0 00:10:42.085 Attached to 0000:00:13.0 00:10:42.085 Attached to 0000:00:12.0 00:10:42.085 Cleaning up... 
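env_dpdk_post_init drives this same initialization from C before spdk_nvme probing attaches the four emulated controllers. A rough sketch of the env setup matching the -c 0x1 --base-virtaddr=0x200000000000 command line above; field names follow spdk/env.h as of this tree, everything not shown is left at its default, and the opts_size step reflects how newer SPDK releases expect the struct to be initialized:

#include "spdk/env.h"

int
sketch_env_init(void)
{
        struct spdk_env_opts opts;

        opts.opts_size = sizeof(opts);  /* expected by recent SPDK releases */
        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";
        opts.core_mask = "0x1";
        opts.base_virtaddr = 0x200000000000ULL;

        return spdk_env_init(&opts);    /* NVMe probe/attach would follow */
}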
00:10:42.085 00:10:42.085 real 0m0.303s 00:10:42.085 user 0m0.112s 00:10:42.085 sys 0m0.091s 00:10:42.085 ************************************ 00:10:42.085 END TEST env_dpdk_post_init 00:10:42.085 ************************************ 00:10:42.085 13:30:34 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.085 13:30:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:42.085 13:30:34 env -- env/env.sh@26 -- # uname 00:10:42.085 13:30:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:42.085 13:30:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:42.085 13:30:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.085 13:30:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.085 13:30:34 env -- common/autotest_common.sh@10 -- # set +x 00:10:42.344 ************************************ 00:10:42.344 START TEST env_mem_callbacks 00:10:42.344 ************************************ 00:10:42.344 13:30:34 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:42.344 EAL: Detected CPU lcores: 10 00:10:42.344 EAL: Detected NUMA nodes: 1 00:10:42.344 EAL: Detected shared linkage of DPDK 00:10:42.344 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:42.344 EAL: Selected IOVA mode 'PA' 00:10:42.344 00:10:42.344 00:10:42.344 CUnit - A unit testing framework for C - Version 2.1-3 00:10:42.344 http://cunit.sourceforge.net/ 00:10:42.344 00:10:42.344 00:10:42.344 Suite: memory 00:10:42.344 Test: test ... 00:10:42.344 register 0x200000200000 2097152 00:10:42.344 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:42.344 malloc 3145728 00:10:42.344 register 0x200000400000 4194304 00:10:42.344 buf 0x2000004fffc0 len 3145728 PASSED 00:10:42.344 malloc 64 00:10:42.344 buf 0x2000004ffec0 len 64 PASSED 00:10:42.344 malloc 4194304 00:10:42.344 register 0x200000800000 6291456 00:10:42.344 buf 0x2000009fffc0 len 4194304 PASSED 00:10:42.344 free 0x2000004fffc0 3145728 00:10:42.344 free 0x2000004ffec0 64 00:10:42.344 unregister 0x200000400000 4194304 PASSED 00:10:42.344 free 0x2000009fffc0 4194304 00:10:42.344 unregister 0x200000800000 6291456 PASSED 00:10:42.344 malloc 8388608 00:10:42.344 register 0x200000400000 10485760 00:10:42.344 buf 0x2000005fffc0 len 8388608 PASSED 00:10:42.344 free 0x2000005fffc0 8388608 00:10:42.344 unregister 0x200000400000 10485760 PASSED 00:10:42.344 passed 00:10:42.344 00:10:42.344 Run Summary: Type Total Ran Passed Failed Inactive 00:10:42.344 suites 1 1 n/a 0 0 00:10:42.344 tests 1 1 1 0 0 00:10:42.344 asserts 15 15 15 0 n/a 00:10:42.344 00:10:42.344 Elapsed time = 0.071 seconds 00:10:42.602 ************************************ 00:10:42.602 END TEST env_mem_callbacks 00:10:42.602 ************************************ 00:10:42.602 00:10:42.602 real 0m0.261s 00:10:42.602 user 0m0.105s 00:10:42.602 sys 0m0.052s 00:10:42.602 13:30:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.602 13:30:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:42.602 ************************************ 00:10:42.602 END TEST env 00:10:42.602 ************************************ 00:10:42.602 00:10:42.602 real 0m8.735s 00:10:42.602 user 0m7.192s 00:10:42.602 sys 0m1.138s 00:10:42.602 13:30:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.602 13:30:34 env -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.602 13:30:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:42.602 13:30:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.602 13:30:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.602 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:10:42.602 ************************************ 00:10:42.602 START TEST rpc 00:10:42.602 ************************************ 00:10:42.602 13:30:34 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:42.602 * Looking for test storage... 00:10:42.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:42.602 13:30:34 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.602 13:30:34 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.602 13:30:34 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.602 13:30:34 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.602 13:30:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.602 13:30:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.602 13:30:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.602 13:30:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.602 13:30:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.602 13:30:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.602 13:30:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.602 13:30:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.602 13:30:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.602 13:30:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.602 13:30:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.602 13:30:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:42.602 13:30:34 rpc -- scripts/common.sh@345 -- # : 1 00:10:42.602 13:30:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.602 13:30:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.602 13:30:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:42.602 13:30:34 rpc -- scripts/common.sh@353 -- # local d=1 00:10:42.602 13:30:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.602 13:30:34 rpc -- scripts/common.sh@355 -- # echo 1 00:10:42.602 13:30:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.602 13:30:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:42.602 13:30:34 rpc -- scripts/common.sh@353 -- # local d=2 00:10:42.602 13:30:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.602 13:30:34 rpc -- scripts/common.sh@355 -- # echo 2 00:10:42.861 13:30:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.861 13:30:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.861 13:30:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
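Behind the register/unregister lines from the env_mem_callbacks run above: the test hands 2 MB-aligned regions to spdk_mem_register(), which walks every allocated spdk_mem_map and fires its notify callback, then undoes it with spdk_mem_unregister(). A minimal sketch under the same alignment assumption (the buffer and the DMA step are placeholders):

#include "spdk/env.h"

int
sketch_register_region(void *buf)
{
        int rc;

        /* buf must be 2 MB aligned and the length a multiple of 2 MB */
        rc = spdk_mem_register(buf, 2 * 1024 * 1024);
        if (rc != 0) {
                return rc;
        }
        /* ... region is now visible to every spdk_mem_map for DMA ... */
        return spdk_mem_unregister(buf, 2 * 1024 * 1024);
}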
00:10:42.861 13:30:34 rpc -- scripts/common.sh@368 -- # return 0 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.861 --rc genhtml_branch_coverage=1 00:10:42.861 --rc genhtml_function_coverage=1 00:10:42.861 --rc genhtml_legend=1 00:10:42.861 --rc geninfo_all_blocks=1 00:10:42.861 --rc geninfo_unexecuted_blocks=1 00:10:42.861 00:10:42.861 ' 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.861 --rc genhtml_branch_coverage=1 00:10:42.861 --rc genhtml_function_coverage=1 00:10:42.861 --rc genhtml_legend=1 00:10:42.861 --rc geninfo_all_blocks=1 00:10:42.861 --rc geninfo_unexecuted_blocks=1 00:10:42.861 00:10:42.861 ' 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.861 --rc genhtml_branch_coverage=1 00:10:42.861 --rc genhtml_function_coverage=1 00:10:42.861 --rc genhtml_legend=1 00:10:42.861 --rc geninfo_all_blocks=1 00:10:42.861 --rc geninfo_unexecuted_blocks=1 00:10:42.861 00:10:42.861 ' 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.861 --rc genhtml_branch_coverage=1 00:10:42.861 --rc genhtml_function_coverage=1 00:10:42.861 --rc genhtml_legend=1 00:10:42.861 --rc geninfo_all_blocks=1 00:10:42.861 --rc geninfo_unexecuted_blocks=1 00:10:42.861 00:10:42.861 ' 00:10:42.861 13:30:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58238 00:10:42.861 13:30:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:42.861 13:30:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:42.861 13:30:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58238 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 58238 ']' 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.861 13:30:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.861 [2024-11-20 13:30:34.753664] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:10:42.861 [2024-11-20 13:30:34.754069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58238 ] 00:10:43.120 [2024-11-20 13:30:34.928987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.120 [2024-11-20 13:30:35.034166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:43.120 [2024-11-20 13:30:35.034479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58238' to capture a snapshot of events at runtime. 
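Once the target is listening on /var/tmp/spdk.sock, rpc.sh drives methods such as bdev_malloc_create and bdev_passthru_create over that socket. Server-side, every method is wired in with SPDK_RPC_REGISTER; the stub below is only shaped like the test plugin's create_malloc (the plugin itself is a Python client-side extension), with the handler body reduced to a canned reply:

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

static void
rpc_sketch_method(struct spdk_jsonrpc_request *request,
                  const struct spdk_json_val *params)
{
        struct spdk_json_write_ctx *w;

        /* A real handler would decode 'params' and create the bdev here */
        w = spdk_jsonrpc_begin_result(request);
        spdk_json_write_string(w, "Malloc1");
        spdk_jsonrpc_end_result(request, w);
}
SPDK_RPC_REGISTER("sketch_create_malloc", rpc_sketch_method, SPDK_RPC_RUNTIME)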
00:10:43.120 [2024-11-20 13:30:35.034706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.120 [2024-11-20 13:30:35.035003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.120 [2024-11-20 13:30:35.035172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58238 for offline analysis/debug. 00:10:43.120 [2024-11-20 13:30:35.036665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.056 13:30:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.056 13:30:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:44.056 13:30:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:44.056 13:30:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:44.056 13:30:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:44.056 13:30:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:44.056 13:30:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.056 13:30:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.056 13:30:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.056 ************************************ 00:10:44.056 START TEST rpc_integrity 00:10:44.056 ************************************ 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.056 13:30:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:44.056 { 00:10:44.056 "name": "Malloc0", 00:10:44.056 "aliases": [ 00:10:44.056 "bdc1f5de-f33f-420c-b735-e067be942813" 00:10:44.056 ], 00:10:44.056 "product_name": "Malloc disk", 00:10:44.056 "block_size": 512, 00:10:44.056 "num_blocks": 16384, 00:10:44.056 "uuid": 
"bdc1f5de-f33f-420c-b735-e067be942813", 00:10:44.056 "assigned_rate_limits": { 00:10:44.056 "rw_ios_per_sec": 0, 00:10:44.056 "rw_mbytes_per_sec": 0, 00:10:44.056 "r_mbytes_per_sec": 0, 00:10:44.056 "w_mbytes_per_sec": 0 00:10:44.056 }, 00:10:44.056 "claimed": false, 00:10:44.056 "zoned": false, 00:10:44.056 "supported_io_types": { 00:10:44.056 "read": true, 00:10:44.056 "write": true, 00:10:44.056 "unmap": true, 00:10:44.056 "flush": true, 00:10:44.056 "reset": true, 00:10:44.056 "nvme_admin": false, 00:10:44.056 "nvme_io": false, 00:10:44.056 "nvme_io_md": false, 00:10:44.056 "write_zeroes": true, 00:10:44.056 "zcopy": true, 00:10:44.056 "get_zone_info": false, 00:10:44.056 "zone_management": false, 00:10:44.056 "zone_append": false, 00:10:44.056 "compare": false, 00:10:44.056 "compare_and_write": false, 00:10:44.056 "abort": true, 00:10:44.056 "seek_hole": false, 00:10:44.056 "seek_data": false, 00:10:44.056 "copy": true, 00:10:44.056 "nvme_iov_md": false 00:10:44.056 }, 00:10:44.056 "memory_domains": [ 00:10:44.056 { 00:10:44.056 "dma_device_id": "system", 00:10:44.056 "dma_device_type": 1 00:10:44.056 }, 00:10:44.056 { 00:10:44.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.056 "dma_device_type": 2 00:10:44.056 } 00:10:44.056 ], 00:10:44.056 "driver_specific": {} 00:10:44.056 } 00:10:44.056 ]' 00:10:44.056 13:30:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:44.056 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:44.056 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:44.056 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.056 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.056 [2024-11-20 13:30:36.045125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:44.056 [2024-11-20 13:30:36.045219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.056 [2024-11-20 13:30:36.045264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:44.056 [2024-11-20 13:30:36.045283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.056 [2024-11-20 13:30:36.048237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.056 [2024-11-20 13:30:36.048294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:44.056 Passthru0 00:10:44.056 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.056 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:44.056 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.056 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.056 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.056 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:44.056 { 00:10:44.056 "name": "Malloc0", 00:10:44.056 "aliases": [ 00:10:44.056 "bdc1f5de-f33f-420c-b735-e067be942813" 00:10:44.056 ], 00:10:44.056 "product_name": "Malloc disk", 00:10:44.056 "block_size": 512, 00:10:44.056 "num_blocks": 16384, 00:10:44.056 "uuid": "bdc1f5de-f33f-420c-b735-e067be942813", 00:10:44.056 "assigned_rate_limits": { 00:10:44.056 "rw_ios_per_sec": 0, 00:10:44.057 "rw_mbytes_per_sec": 0, 00:10:44.057 "r_mbytes_per_sec": 0, 00:10:44.057 "w_mbytes_per_sec": 0 00:10:44.057 }, 
00:10:44.057 "claimed": true, 00:10:44.057 "claim_type": "exclusive_write", 00:10:44.057 "zoned": false, 00:10:44.057 "supported_io_types": { 00:10:44.057 "read": true, 00:10:44.057 "write": true, 00:10:44.057 "unmap": true, 00:10:44.057 "flush": true, 00:10:44.057 "reset": true, 00:10:44.057 "nvme_admin": false, 00:10:44.057 "nvme_io": false, 00:10:44.057 "nvme_io_md": false, 00:10:44.057 "write_zeroes": true, 00:10:44.057 "zcopy": true, 00:10:44.057 "get_zone_info": false, 00:10:44.057 "zone_management": false, 00:10:44.057 "zone_append": false, 00:10:44.057 "compare": false, 00:10:44.057 "compare_and_write": false, 00:10:44.057 "abort": true, 00:10:44.057 "seek_hole": false, 00:10:44.057 "seek_data": false, 00:10:44.057 "copy": true, 00:10:44.057 "nvme_iov_md": false 00:10:44.057 }, 00:10:44.057 "memory_domains": [ 00:10:44.057 { 00:10:44.057 "dma_device_id": "system", 00:10:44.057 "dma_device_type": 1 00:10:44.057 }, 00:10:44.057 { 00:10:44.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.057 "dma_device_type": 2 00:10:44.057 } 00:10:44.057 ], 00:10:44.057 "driver_specific": {} 00:10:44.057 }, 00:10:44.057 { 00:10:44.057 "name": "Passthru0", 00:10:44.057 "aliases": [ 00:10:44.057 "43ef7083-dc89-556e-8da4-a58ed06d91dc" 00:10:44.057 ], 00:10:44.057 "product_name": "passthru", 00:10:44.057 "block_size": 512, 00:10:44.057 "num_blocks": 16384, 00:10:44.057 "uuid": "43ef7083-dc89-556e-8da4-a58ed06d91dc", 00:10:44.057 "assigned_rate_limits": { 00:10:44.057 "rw_ios_per_sec": 0, 00:10:44.057 "rw_mbytes_per_sec": 0, 00:10:44.057 "r_mbytes_per_sec": 0, 00:10:44.057 "w_mbytes_per_sec": 0 00:10:44.057 }, 00:10:44.057 "claimed": false, 00:10:44.057 "zoned": false, 00:10:44.057 "supported_io_types": { 00:10:44.057 "read": true, 00:10:44.057 "write": true, 00:10:44.057 "unmap": true, 00:10:44.057 "flush": true, 00:10:44.057 "reset": true, 00:10:44.057 "nvme_admin": false, 00:10:44.057 "nvme_io": false, 00:10:44.057 "nvme_io_md": false, 00:10:44.057 "write_zeroes": true, 00:10:44.057 "zcopy": true, 00:10:44.057 "get_zone_info": false, 00:10:44.057 "zone_management": false, 00:10:44.057 "zone_append": false, 00:10:44.057 "compare": false, 00:10:44.057 "compare_and_write": false, 00:10:44.057 "abort": true, 00:10:44.057 "seek_hole": false, 00:10:44.057 "seek_data": false, 00:10:44.057 "copy": true, 00:10:44.057 "nvme_iov_md": false 00:10:44.057 }, 00:10:44.057 "memory_domains": [ 00:10:44.057 { 00:10:44.057 "dma_device_id": "system", 00:10:44.057 "dma_device_type": 1 00:10:44.057 }, 00:10:44.057 { 00:10:44.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.057 "dma_device_type": 2 00:10:44.057 } 00:10:44.057 ], 00:10:44.057 "driver_specific": { 00:10:44.057 "passthru": { 00:10:44.057 "name": "Passthru0", 00:10:44.057 "base_bdev_name": "Malloc0" 00:10:44.057 } 00:10:44.057 } 00:10:44.057 } 00:10:44.057 ]' 00:10:44.057 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:44.316 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:44.316 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.316 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.316 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.316 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:44.316 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:44.316 ************************************ 00:10:44.316 END TEST rpc_integrity 00:10:44.316 ************************************ 00:10:44.316 13:30:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:44.316 00:10:44.316 real 0m0.362s 00:10:44.316 user 0m0.233s 00:10:44.316 sys 0m0.034s 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.316 13:30:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.316 13:30:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:44.316 13:30:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.316 13:30:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.316 13:30:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.316 ************************************ 00:10:44.316 START TEST rpc_plugins 00:10:44.316 ************************************ 00:10:44.316 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:44.316 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:44.316 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.316 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:44.316 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.316 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:44.316 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:44.316 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.316 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:44.316 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.316 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:44.316 { 00:10:44.316 "name": "Malloc1", 00:10:44.316 "aliases": [ 00:10:44.316 "ecf69640-3d72-4bfc-9d02-3d9df53d91a0" 00:10:44.316 ], 00:10:44.316 "product_name": "Malloc disk", 00:10:44.316 "block_size": 4096, 00:10:44.316 "num_blocks": 256, 00:10:44.316 "uuid": "ecf69640-3d72-4bfc-9d02-3d9df53d91a0", 00:10:44.316 "assigned_rate_limits": { 00:10:44.316 "rw_ios_per_sec": 0, 00:10:44.316 "rw_mbytes_per_sec": 0, 00:10:44.316 "r_mbytes_per_sec": 0, 00:10:44.316 "w_mbytes_per_sec": 0 00:10:44.316 }, 00:10:44.316 "claimed": false, 00:10:44.316 "zoned": false, 00:10:44.316 "supported_io_types": { 00:10:44.316 "read": true, 00:10:44.316 "write": true, 00:10:44.316 "unmap": true, 00:10:44.316 "flush": true, 00:10:44.317 "reset": true, 00:10:44.317 "nvme_admin": false, 00:10:44.317 "nvme_io": false, 00:10:44.317 "nvme_io_md": false, 00:10:44.317 "write_zeroes": true, 00:10:44.317 "zcopy": true, 00:10:44.317 "get_zone_info": false, 00:10:44.317 "zone_management": false, 
00:10:44.317 "zone_append": false, 00:10:44.317 "compare": false, 00:10:44.317 "compare_and_write": false, 00:10:44.317 "abort": true, 00:10:44.317 "seek_hole": false, 00:10:44.317 "seek_data": false, 00:10:44.317 "copy": true, 00:10:44.317 "nvme_iov_md": false 00:10:44.317 }, 00:10:44.317 "memory_domains": [ 00:10:44.317 { 00:10:44.317 "dma_device_id": "system", 00:10:44.317 "dma_device_type": 1 00:10:44.317 }, 00:10:44.317 { 00:10:44.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.317 "dma_device_type": 2 00:10:44.317 } 00:10:44.317 ], 00:10:44.317 "driver_specific": {} 00:10:44.317 } 00:10:44.317 ]' 00:10:44.317 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:44.576 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:44.576 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.576 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.576 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:44.576 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:44.576 ************************************ 00:10:44.576 END TEST rpc_plugins 00:10:44.576 ************************************ 00:10:44.576 13:30:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:44.576 00:10:44.576 real 0m0.162s 00:10:44.576 user 0m0.112s 00:10:44.576 sys 0m0.009s 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.576 13:30:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:44.576 13:30:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:44.576 13:30:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.576 13:30:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.576 13:30:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.576 ************************************ 00:10:44.576 START TEST rpc_trace_cmd_test 00:10:44.576 ************************************ 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:44.576 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58238", 00:10:44.576 "tpoint_group_mask": "0x8", 00:10:44.576 "iscsi_conn": { 00:10:44.576 "mask": "0x2", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "scsi": { 00:10:44.576 "mask": "0x4", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "bdev": { 00:10:44.576 "mask": 
"0x8", 00:10:44.576 "tpoint_mask": "0xffffffffffffffff" 00:10:44.576 }, 00:10:44.576 "nvmf_rdma": { 00:10:44.576 "mask": "0x10", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "nvmf_tcp": { 00:10:44.576 "mask": "0x20", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "ftl": { 00:10:44.576 "mask": "0x40", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "blobfs": { 00:10:44.576 "mask": "0x80", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "dsa": { 00:10:44.576 "mask": "0x200", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "thread": { 00:10:44.576 "mask": "0x400", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "nvme_pcie": { 00:10:44.576 "mask": "0x800", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "iaa": { 00:10:44.576 "mask": "0x1000", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "nvme_tcp": { 00:10:44.576 "mask": "0x2000", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "bdev_nvme": { 00:10:44.576 "mask": "0x4000", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "sock": { 00:10:44.576 "mask": "0x8000", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "blob": { 00:10:44.576 "mask": "0x10000", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "bdev_raid": { 00:10:44.576 "mask": "0x20000", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 }, 00:10:44.576 "scheduler": { 00:10:44.576 "mask": "0x40000", 00:10:44.576 "tpoint_mask": "0x0" 00:10:44.576 } 00:10:44.576 }' 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:44.576 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:44.835 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:44.835 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:44.835 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:44.835 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:44.835 ************************************ 00:10:44.835 END TEST rpc_trace_cmd_test 00:10:44.835 ************************************ 00:10:44.835 13:30:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:44.835 00:10:44.835 real 0m0.281s 00:10:44.835 user 0m0.239s 00:10:44.835 sys 0m0.032s 00:10:44.835 13:30:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.835 13:30:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.835 13:30:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:44.835 13:30:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:44.835 13:30:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:44.835 13:30:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.835 13:30:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.835 13:30:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.835 ************************************ 00:10:44.835 START TEST rpc_daemon_integrity 00:10:44.835 ************************************ 00:10:44.835 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 
-- # rpc_integrity 00:10:44.835 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.835 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.835 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:44.835 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.835 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:44.835 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:45.094 { 00:10:45.094 "name": "Malloc2", 00:10:45.094 "aliases": [ 00:10:45.094 "306577ba-2577-4644-a4b0-48c415a7e893" 00:10:45.094 ], 00:10:45.094 "product_name": "Malloc disk", 00:10:45.094 "block_size": 512, 00:10:45.094 "num_blocks": 16384, 00:10:45.094 "uuid": "306577ba-2577-4644-a4b0-48c415a7e893", 00:10:45.094 "assigned_rate_limits": { 00:10:45.094 "rw_ios_per_sec": 0, 00:10:45.094 "rw_mbytes_per_sec": 0, 00:10:45.094 "r_mbytes_per_sec": 0, 00:10:45.094 "w_mbytes_per_sec": 0 00:10:45.094 }, 00:10:45.094 "claimed": false, 00:10:45.094 "zoned": false, 00:10:45.094 "supported_io_types": { 00:10:45.094 "read": true, 00:10:45.094 "write": true, 00:10:45.094 "unmap": true, 00:10:45.094 "flush": true, 00:10:45.094 "reset": true, 00:10:45.094 "nvme_admin": false, 00:10:45.094 "nvme_io": false, 00:10:45.094 "nvme_io_md": false, 00:10:45.094 "write_zeroes": true, 00:10:45.094 "zcopy": true, 00:10:45.094 "get_zone_info": false, 00:10:45.094 "zone_management": false, 00:10:45.094 "zone_append": false, 00:10:45.094 "compare": false, 00:10:45.094 "compare_and_write": false, 00:10:45.094 "abort": true, 00:10:45.094 "seek_hole": false, 00:10:45.094 "seek_data": false, 00:10:45.094 "copy": true, 00:10:45.094 "nvme_iov_md": false 00:10:45.094 }, 00:10:45.094 "memory_domains": [ 00:10:45.094 { 00:10:45.094 "dma_device_id": "system", 00:10:45.094 "dma_device_type": 1 00:10:45.094 }, 00:10:45.094 { 00:10:45.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.094 "dma_device_type": 2 00:10:45.094 } 00:10:45.094 ], 00:10:45.094 "driver_specific": {} 00:10:45.094 } 00:10:45.094 ]' 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 [2024-11-20 13:30:36.991784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:45.094 [2024-11-20 13:30:36.991880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.094 [2024-11-20 13:30:36.991916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:45.094 [2024-11-20 13:30:36.991934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.094 [2024-11-20 13:30:36.994749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.094 [2024-11-20 13:30:36.994808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:45.094 Passthru0 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.094 13:30:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:45.094 { 00:10:45.094 "name": "Malloc2", 00:10:45.094 "aliases": [ 00:10:45.094 "306577ba-2577-4644-a4b0-48c415a7e893" 00:10:45.094 ], 00:10:45.094 "product_name": "Malloc disk", 00:10:45.094 "block_size": 512, 00:10:45.094 "num_blocks": 16384, 00:10:45.094 "uuid": "306577ba-2577-4644-a4b0-48c415a7e893", 00:10:45.094 "assigned_rate_limits": { 00:10:45.094 "rw_ios_per_sec": 0, 00:10:45.094 "rw_mbytes_per_sec": 0, 00:10:45.094 "r_mbytes_per_sec": 0, 00:10:45.094 "w_mbytes_per_sec": 0 00:10:45.094 }, 00:10:45.094 "claimed": true, 00:10:45.094 "claim_type": "exclusive_write", 00:10:45.094 "zoned": false, 00:10:45.094 "supported_io_types": { 00:10:45.094 "read": true, 00:10:45.094 "write": true, 00:10:45.094 "unmap": true, 00:10:45.094 "flush": true, 00:10:45.094 "reset": true, 00:10:45.094 "nvme_admin": false, 00:10:45.094 "nvme_io": false, 00:10:45.094 "nvme_io_md": false, 00:10:45.094 "write_zeroes": true, 00:10:45.094 "zcopy": true, 00:10:45.094 "get_zone_info": false, 00:10:45.094 "zone_management": false, 00:10:45.094 "zone_append": false, 00:10:45.094 "compare": false, 00:10:45.094 "compare_and_write": false, 00:10:45.094 "abort": true, 00:10:45.094 "seek_hole": false, 00:10:45.094 "seek_data": false, 00:10:45.094 "copy": true, 00:10:45.094 "nvme_iov_md": false 00:10:45.094 }, 00:10:45.094 "memory_domains": [ 00:10:45.094 { 00:10:45.094 "dma_device_id": "system", 00:10:45.094 "dma_device_type": 1 00:10:45.094 }, 00:10:45.094 { 00:10:45.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.094 "dma_device_type": 2 00:10:45.094 } 00:10:45.094 ], 00:10:45.094 "driver_specific": {} 00:10:45.094 }, 00:10:45.094 { 00:10:45.094 "name": "Passthru0", 00:10:45.094 "aliases": [ 00:10:45.094 "d7b2ee32-195a-5f25-8348-819b12a5eeb5" 00:10:45.094 ], 00:10:45.094 "product_name": "passthru", 00:10:45.094 "block_size": 512, 00:10:45.094 "num_blocks": 16384, 00:10:45.094 "uuid": "d7b2ee32-195a-5f25-8348-819b12a5eeb5", 00:10:45.094 "assigned_rate_limits": { 00:10:45.094 "rw_ios_per_sec": 0, 00:10:45.094 "rw_mbytes_per_sec": 0, 00:10:45.094 "r_mbytes_per_sec": 0, 00:10:45.094 "w_mbytes_per_sec": 0 00:10:45.094 }, 
00:10:45.094 "claimed": false, 00:10:45.094 "zoned": false, 00:10:45.094 "supported_io_types": { 00:10:45.094 "read": true, 00:10:45.094 "write": true, 00:10:45.094 "unmap": true, 00:10:45.094 "flush": true, 00:10:45.094 "reset": true, 00:10:45.094 "nvme_admin": false, 00:10:45.094 "nvme_io": false, 00:10:45.094 "nvme_io_md": false, 00:10:45.094 "write_zeroes": true, 00:10:45.094 "zcopy": true, 00:10:45.094 "get_zone_info": false, 00:10:45.094 "zone_management": false, 00:10:45.094 "zone_append": false, 00:10:45.094 "compare": false, 00:10:45.094 "compare_and_write": false, 00:10:45.094 "abort": true, 00:10:45.094 "seek_hole": false, 00:10:45.094 "seek_data": false, 00:10:45.094 "copy": true, 00:10:45.094 "nvme_iov_md": false 00:10:45.094 }, 00:10:45.094 "memory_domains": [ 00:10:45.094 { 00:10:45.094 "dma_device_id": "system", 00:10:45.094 "dma_device_type": 1 00:10:45.094 }, 00:10:45.094 { 00:10:45.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.094 "dma_device_type": 2 00:10:45.094 } 00:10:45.094 ], 00:10:45.094 "driver_specific": { 00:10:45.094 "passthru": { 00:10:45.094 "name": "Passthru0", 00:10:45.094 "base_bdev_name": "Malloc2" 00:10:45.094 } 00:10:45.094 } 00:10:45.094 } 00:10:45.094 ]' 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.094 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:45.353 ************************************ 00:10:45.353 END TEST rpc_daemon_integrity 00:10:45.353 ************************************ 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:45.353 00:10:45.353 real 0m0.397s 00:10:45.353 user 0m0.268s 00:10:45.353 sys 0m0.038s 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.353 13:30:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 13:30:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:45.353 13:30:37 rpc -- rpc/rpc.sh@84 -- # killprocess 58238 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 58238 ']' 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@958 -- # kill -0 58238 00:10:45.353 13:30:37 rpc 
-- common/autotest_common.sh@959 -- # uname 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58238 00:10:45.353 killing process with pid 58238 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58238' 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@973 -- # kill 58238 00:10:45.353 13:30:37 rpc -- common/autotest_common.sh@978 -- # wait 58238 00:10:47.885 00:10:47.885 real 0m4.908s 00:10:47.885 user 0m5.889s 00:10:47.885 sys 0m0.753s 00:10:47.885 13:30:39 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.885 ************************************ 00:10:47.885 13:30:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.885 END TEST rpc 00:10:47.885 ************************************ 00:10:47.885 13:30:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:47.885 13:30:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.885 13:30:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.885 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:10:47.885 ************************************ 00:10:47.885 START TEST skip_rpc 00:10:47.885 ************************************ 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:47.885 * Looking for test storage... 00:10:47.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.885 13:30:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.885 --rc genhtml_branch_coverage=1 00:10:47.885 --rc genhtml_function_coverage=1 00:10:47.885 --rc genhtml_legend=1 00:10:47.885 --rc geninfo_all_blocks=1 00:10:47.885 --rc geninfo_unexecuted_blocks=1 00:10:47.885 00:10:47.885 ' 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.885 --rc genhtml_branch_coverage=1 00:10:47.885 --rc genhtml_function_coverage=1 00:10:47.885 --rc genhtml_legend=1 00:10:47.885 --rc geninfo_all_blocks=1 00:10:47.885 --rc geninfo_unexecuted_blocks=1 00:10:47.885 00:10:47.885 ' 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.885 --rc genhtml_branch_coverage=1 00:10:47.885 --rc genhtml_function_coverage=1 00:10:47.885 --rc genhtml_legend=1 00:10:47.885 --rc geninfo_all_blocks=1 00:10:47.885 --rc geninfo_unexecuted_blocks=1 00:10:47.885 00:10:47.885 ' 00:10:47.885 13:30:39 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.885 --rc genhtml_branch_coverage=1 00:10:47.885 --rc genhtml_function_coverage=1 00:10:47.885 --rc genhtml_legend=1 00:10:47.885 --rc geninfo_all_blocks=1 00:10:47.886 --rc geninfo_unexecuted_blocks=1 00:10:47.886 00:10:47.886 ' 00:10:47.886 13:30:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:47.886 13:30:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:47.886 13:30:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:47.886 13:30:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.886 13:30:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.886 13:30:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.886 ************************************ 00:10:47.886 START TEST skip_rpc 00:10:47.886 ************************************ 00:10:47.886 13:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:47.886 13:30:39 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58462 00:10:47.886 13:30:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:47.886 13:30:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:47.886 13:30:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:47.886 [2024-11-20 13:30:39.783556] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:10:47.886 [2024-11-20 13:30:39.784143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58462 ] 00:10:48.145 [2024-11-20 13:30:39.975194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.145 [2024-11-20 13:30:40.099825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58462 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58462 ']' 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58462 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:53.461 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.462 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58462 00:10:53.462 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.462 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.462 killing process with pid 58462 00:10:53.462 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58462' 00:10:53.462 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 58462 00:10:53.462 13:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58462 00:10:54.837 00:10:54.837 real 0m7.093s 00:10:54.837 user 0m6.667s 00:10:54.837 sys 0m0.309s 00:10:54.837 13:30:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.837 13:30:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.837 ************************************ 00:10:54.837 END TEST skip_rpc 00:10:54.837 ************************************ 00:10:54.837 13:30:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:54.837 13:30:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:54.837 13:30:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.837 13:30:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.837 ************************************ 00:10:54.837 START TEST skip_rpc_with_json 00:10:54.837 ************************************ 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58566 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58566 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58566 ']' 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.837 13:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:55.096 [2024-11-20 13:30:46.924441] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
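
The skip_rpc pass traced above is a negative check built on the suite's NOT helper: run a command and succeed only if it fails (the es= bookkeeping in the trace is its exit-status plumbing). A condensed sketch of the same check, assuming spdk_tgt and SPDK's rpc.py are on PATH; the helper body and the sleep are illustrative stand-ins, not the autotest_common.sh implementation:

    NOT() {
        # invert the exit status: the check passes only when "$@" fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    NOT rpc.py spdk_get_version    # no RPC server is listening, so this must fail
    kill "$spdk_pid"
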
00:10:55.096 [2024-11-20 13:30:46.924622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58566 ] 00:10:55.353 [2024-11-20 13:30:47.151803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.353 [2024-11-20 13:30:47.288440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:56.289 [2024-11-20 13:30:48.050576] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:56.289 request: 00:10:56.289 { 00:10:56.289 "trtype": "tcp", 00:10:56.289 "method": "nvmf_get_transports", 00:10:56.289 "req_id": 1 00:10:56.289 } 00:10:56.289 Got JSON-RPC error response 00:10:56.289 response: 00:10:56.289 { 00:10:56.289 "code": -19, 00:10:56.289 "message": "No such device" 00:10:56.289 } 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:56.289 [2024-11-20 13:30:48.058703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.289 13:30:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:56.289 { 00:10:56.289 "subsystems": [ 00:10:56.289 { 00:10:56.289 "subsystem": "fsdev", 00:10:56.289 "config": [ 00:10:56.289 { 00:10:56.289 "method": "fsdev_set_opts", 00:10:56.289 "params": { 00:10:56.289 "fsdev_io_pool_size": 65535, 00:10:56.289 "fsdev_io_cache_size": 256 00:10:56.289 } 00:10:56.289 } 00:10:56.289 ] 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "subsystem": "keyring", 00:10:56.289 "config": [] 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "subsystem": "iobuf", 00:10:56.289 "config": [ 00:10:56.289 { 00:10:56.289 "method": "iobuf_set_options", 00:10:56.289 "params": { 00:10:56.289 "small_pool_count": 8192, 00:10:56.289 "large_pool_count": 1024, 00:10:56.289 "small_bufsize": 8192, 00:10:56.289 "large_bufsize": 135168, 00:10:56.289 "enable_numa": false 00:10:56.289 } 00:10:56.289 } 00:10:56.289 ] 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "subsystem": "sock", 00:10:56.289 "config": [ 00:10:56.289 { 
00:10:56.289 "method": "sock_set_default_impl", 00:10:56.289 "params": { 00:10:56.289 "impl_name": "posix" 00:10:56.289 } 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "method": "sock_impl_set_options", 00:10:56.289 "params": { 00:10:56.289 "impl_name": "ssl", 00:10:56.289 "recv_buf_size": 4096, 00:10:56.289 "send_buf_size": 4096, 00:10:56.289 "enable_recv_pipe": true, 00:10:56.289 "enable_quickack": false, 00:10:56.289 "enable_placement_id": 0, 00:10:56.289 "enable_zerocopy_send_server": true, 00:10:56.289 "enable_zerocopy_send_client": false, 00:10:56.289 "zerocopy_threshold": 0, 00:10:56.289 "tls_version": 0, 00:10:56.289 "enable_ktls": false 00:10:56.289 } 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "method": "sock_impl_set_options", 00:10:56.289 "params": { 00:10:56.289 "impl_name": "posix", 00:10:56.289 "recv_buf_size": 2097152, 00:10:56.289 "send_buf_size": 2097152, 00:10:56.289 "enable_recv_pipe": true, 00:10:56.289 "enable_quickack": false, 00:10:56.289 "enable_placement_id": 0, 00:10:56.289 "enable_zerocopy_send_server": true, 00:10:56.289 "enable_zerocopy_send_client": false, 00:10:56.289 "zerocopy_threshold": 0, 00:10:56.289 "tls_version": 0, 00:10:56.289 "enable_ktls": false 00:10:56.289 } 00:10:56.289 } 00:10:56.289 ] 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "subsystem": "vmd", 00:10:56.289 "config": [] 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "subsystem": "accel", 00:10:56.289 "config": [ 00:10:56.289 { 00:10:56.289 "method": "accel_set_options", 00:10:56.289 "params": { 00:10:56.289 "small_cache_size": 128, 00:10:56.289 "large_cache_size": 16, 00:10:56.289 "task_count": 2048, 00:10:56.289 "sequence_count": 2048, 00:10:56.289 "buf_count": 2048 00:10:56.289 } 00:10:56.289 } 00:10:56.289 ] 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "subsystem": "bdev", 00:10:56.289 "config": [ 00:10:56.289 { 00:10:56.289 "method": "bdev_set_options", 00:10:56.289 "params": { 00:10:56.289 "bdev_io_pool_size": 65535, 00:10:56.289 "bdev_io_cache_size": 256, 00:10:56.289 "bdev_auto_examine": true, 00:10:56.289 "iobuf_small_cache_size": 128, 00:10:56.289 "iobuf_large_cache_size": 16 00:10:56.289 } 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "method": "bdev_raid_set_options", 00:10:56.289 "params": { 00:10:56.289 "process_window_size_kb": 1024, 00:10:56.289 "process_max_bandwidth_mb_sec": 0 00:10:56.289 } 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "method": "bdev_iscsi_set_options", 00:10:56.289 "params": { 00:10:56.289 "timeout_sec": 30 00:10:56.289 } 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "method": "bdev_nvme_set_options", 00:10:56.289 "params": { 00:10:56.289 "action_on_timeout": "none", 00:10:56.289 "timeout_us": 0, 00:10:56.289 "timeout_admin_us": 0, 00:10:56.289 "keep_alive_timeout_ms": 10000, 00:10:56.289 "arbitration_burst": 0, 00:10:56.289 "low_priority_weight": 0, 00:10:56.289 "medium_priority_weight": 0, 00:10:56.289 "high_priority_weight": 0, 00:10:56.289 "nvme_adminq_poll_period_us": 10000, 00:10:56.289 "nvme_ioq_poll_period_us": 0, 00:10:56.289 "io_queue_requests": 0, 00:10:56.289 "delay_cmd_submit": true, 00:10:56.289 "transport_retry_count": 4, 00:10:56.289 "bdev_retry_count": 3, 00:10:56.289 "transport_ack_timeout": 0, 00:10:56.289 "ctrlr_loss_timeout_sec": 0, 00:10:56.289 "reconnect_delay_sec": 0, 00:10:56.289 "fast_io_fail_timeout_sec": 0, 00:10:56.289 "disable_auto_failback": false, 00:10:56.289 "generate_uuids": false, 00:10:56.289 "transport_tos": 0, 00:10:56.289 "nvme_error_stat": false, 00:10:56.289 "rdma_srq_size": 0, 00:10:56.289 "io_path_stat": false, 
00:10:56.289 "allow_accel_sequence": false, 00:10:56.289 "rdma_max_cq_size": 0, 00:10:56.289 "rdma_cm_event_timeout_ms": 0, 00:10:56.289 "dhchap_digests": [ 00:10:56.289 "sha256", 00:10:56.289 "sha384", 00:10:56.289 "sha512" 00:10:56.289 ], 00:10:56.289 "dhchap_dhgroups": [ 00:10:56.289 "null", 00:10:56.289 "ffdhe2048", 00:10:56.289 "ffdhe3072", 00:10:56.289 "ffdhe4096", 00:10:56.289 "ffdhe6144", 00:10:56.289 "ffdhe8192" 00:10:56.289 ] 00:10:56.289 } 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "method": "bdev_nvme_set_hotplug", 00:10:56.289 "params": { 00:10:56.289 "period_us": 100000, 00:10:56.289 "enable": false 00:10:56.289 } 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "method": "bdev_wait_for_examine" 00:10:56.289 } 00:10:56.289 ] 00:10:56.289 }, 00:10:56.289 { 00:10:56.289 "subsystem": "scsi", 00:10:56.290 "config": null 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "subsystem": "scheduler", 00:10:56.290 "config": [ 00:10:56.290 { 00:10:56.290 "method": "framework_set_scheduler", 00:10:56.290 "params": { 00:10:56.290 "name": "static" 00:10:56.290 } 00:10:56.290 } 00:10:56.290 ] 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "subsystem": "vhost_scsi", 00:10:56.290 "config": [] 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "subsystem": "vhost_blk", 00:10:56.290 "config": [] 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "subsystem": "ublk", 00:10:56.290 "config": [] 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "subsystem": "nbd", 00:10:56.290 "config": [] 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "subsystem": "nvmf", 00:10:56.290 "config": [ 00:10:56.290 { 00:10:56.290 "method": "nvmf_set_config", 00:10:56.290 "params": { 00:10:56.290 "discovery_filter": "match_any", 00:10:56.290 "admin_cmd_passthru": { 00:10:56.290 "identify_ctrlr": false 00:10:56.290 }, 00:10:56.290 "dhchap_digests": [ 00:10:56.290 "sha256", 00:10:56.290 "sha384", 00:10:56.290 "sha512" 00:10:56.290 ], 00:10:56.290 "dhchap_dhgroups": [ 00:10:56.290 "null", 00:10:56.290 "ffdhe2048", 00:10:56.290 "ffdhe3072", 00:10:56.290 "ffdhe4096", 00:10:56.290 "ffdhe6144", 00:10:56.290 "ffdhe8192" 00:10:56.290 ] 00:10:56.290 } 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "method": "nvmf_set_max_subsystems", 00:10:56.290 "params": { 00:10:56.290 "max_subsystems": 1024 00:10:56.290 } 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "method": "nvmf_set_crdt", 00:10:56.290 "params": { 00:10:56.290 "crdt1": 0, 00:10:56.290 "crdt2": 0, 00:10:56.290 "crdt3": 0 00:10:56.290 } 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "method": "nvmf_create_transport", 00:10:56.290 "params": { 00:10:56.290 "trtype": "TCP", 00:10:56.290 "max_queue_depth": 128, 00:10:56.290 "max_io_qpairs_per_ctrlr": 127, 00:10:56.290 "in_capsule_data_size": 4096, 00:10:56.290 "max_io_size": 131072, 00:10:56.290 "io_unit_size": 131072, 00:10:56.290 "max_aq_depth": 128, 00:10:56.290 "num_shared_buffers": 511, 00:10:56.290 "buf_cache_size": 4294967295, 00:10:56.290 "dif_insert_or_strip": false, 00:10:56.290 "zcopy": false, 00:10:56.290 "c2h_success": true, 00:10:56.290 "sock_priority": 0, 00:10:56.290 "abort_timeout_sec": 1, 00:10:56.290 "ack_timeout": 0, 00:10:56.290 "data_wr_pool_size": 0 00:10:56.290 } 00:10:56.290 } 00:10:56.290 ] 00:10:56.290 }, 00:10:56.290 { 00:10:56.290 "subsystem": "iscsi", 00:10:56.290 "config": [ 00:10:56.290 { 00:10:56.290 "method": "iscsi_set_options", 00:10:56.290 "params": { 00:10:56.290 "node_base": "iqn.2016-06.io.spdk", 00:10:56.290 "max_sessions": 128, 00:10:56.290 "max_connections_per_session": 2, 00:10:56.290 "max_queue_depth": 64, 00:10:56.290 
"default_time2wait": 2, 00:10:56.290 "default_time2retain": 20, 00:10:56.290 "first_burst_length": 8192, 00:10:56.290 "immediate_data": true, 00:10:56.290 "allow_duplicated_isid": false, 00:10:56.290 "error_recovery_level": 0, 00:10:56.290 "nop_timeout": 60, 00:10:56.290 "nop_in_interval": 30, 00:10:56.290 "disable_chap": false, 00:10:56.290 "require_chap": false, 00:10:56.290 "mutual_chap": false, 00:10:56.290 "chap_group": 0, 00:10:56.290 "max_large_datain_per_connection": 64, 00:10:56.290 "max_r2t_per_connection": 4, 00:10:56.290 "pdu_pool_size": 36864, 00:10:56.290 "immediate_data_pool_size": 16384, 00:10:56.290 "data_out_pool_size": 2048 00:10:56.290 } 00:10:56.290 } 00:10:56.290 ] 00:10:56.290 } 00:10:56.290 ] 00:10:56.290 } 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58566 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58566 ']' 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58566 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58566 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.290 killing process with pid 58566 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58566' 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58566 00:10:56.290 13:30:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58566 00:10:58.818 13:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58622 00:10:58.818 13:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:58.818 13:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:04.081 13:30:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58622 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58622 ']' 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58622 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58622 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.082 killing process with pid 58622 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58622' 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58622 00:11:04.082 13:30:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58622 00:11:05.540 13:30:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:05.540 13:30:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:05.540 00:11:05.540 real 0m10.766s 00:11:05.540 user 0m10.438s 00:11:05.540 sys 0m0.760s 00:11:05.540 13:30:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.540 13:30:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:05.540 ************************************ 00:11:05.540 END TEST skip_rpc_with_json 00:11:05.540 ************************************ 00:11:05.799 13:30:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:05.799 13:30:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.799 13:30:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.799 13:30:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.799 ************************************ 00:11:05.799 START TEST skip_rpc_with_delay 00:11:05.799 ************************************ 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:05.799 [2024-11-20 13:30:57.716172] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
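
The JSON dump above is the crux of skip_rpc_with_json: everything the first target was configured with over RPC is serialized by save_config, a second target replays it from --json with the RPC server disabled, and the grep for 'TCP Transport Init' proves the transport came back without a single RPC call. A rough replay of that round trip; the file paths and sleeps here are illustrative, the commands mirror the trace:

    spdk_tgt -m 0x1 &
    pid=$!
    sleep 5                                     # illustrative; the suite uses waitforlisten
    rpc.py nvmf_create_transport -t tcp
    rpc.py save_config > /tmp/config.json
    kill "$pid"
    # Replay the saved state with no RPC server at all; the transport is
    # recreated purely from the JSON.
    spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
    pid=$!
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt   # transport restored without any RPC
    kill "$pid"

The assertion goes through the log rather than RPC since --no-rpc-server leaves no socket to interrogate.
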
00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.799 00:11:05.799 real 0m0.183s 00:11:05.799 user 0m0.099s 00:11:05.799 sys 0m0.082s 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.799 13:30:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:05.799 ************************************ 00:11:05.799 END TEST skip_rpc_with_delay 00:11:05.799 ************************************ 00:11:05.799 13:30:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:05.799 13:30:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:05.799 13:30:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:05.799 13:30:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.799 13:30:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.799 13:30:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.799 ************************************ 00:11:05.799 START TEST exit_on_failed_rpc_init 00:11:05.799 ************************************ 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58750 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58750 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58750 ']' 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.799 13:30:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:06.057 [2024-11-20 13:30:57.930220] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
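
waitforlisten, invoked above, is what stands behind the 'Waiting for process to start up and listen on UNIX domain socket...' message: it polls until the new target answers RPC, bailing out if the process dies first. A rough approximation of its shape (not the autotest_common.sh implementation; the retry count and interval are guesses):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            if rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; then
                return 0                              # socket is up and answering
            fi
            sleep 0.1
        done
        return 1                                      # timed out
    }
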
00:11:06.057 [2024-11-20 13:30:57.930381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58750 ] 00:11:06.314 [2024-11-20 13:30:58.130712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.314 [2024-11-20 13:30:58.259830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:07.244 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:07.244 [2024-11-20 13:30:59.199013] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:11:07.244 [2024-11-20 13:30:59.199189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58768 ] 00:11:07.501 [2024-11-20 13:30:59.402448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.758 [2024-11-20 13:30:59.578476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.758 [2024-11-20 13:30:59.578627] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
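
The 'socket in use' error above is exactly what exit_on_failed_rpc_init is after: a second target pointed at the default /var/tmp/spdk.sock must fail fast instead of hanging. The scenario condensed into a sketch (the core masks mirror the trace, NOT is the inversion helper sketched earlier, everything else is illustrative):

    spdk_tgt -m 0x1 &
    first=$!
    sleep 5                         # stand-in for waitforlisten on the first target
    NOT spdk_tgt -m 0x2             # same default /var/tmp/spdk.sock: must exit non-zero
    kill "$first"
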
00:11:07.758 [2024-11-20 13:30:59.578656] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:07.758 [2024-11-20 13:30:59.578689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58750 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58750 ']' 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58750 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58750 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.016 killing process with pid 58750 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58750' 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58750 00:11:08.016 13:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58750 00:11:10.544 00:11:10.544 real 0m4.165s 00:11:10.544 user 0m4.777s 00:11:10.544 sys 0m0.532s 00:11:10.544 13:31:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.544 ************************************ 00:11:10.544 END TEST exit_on_failed_rpc_init 00:11:10.544 ************************************ 00:11:10.544 13:31:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:10.544 13:31:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:10.544 00:11:10.544 real 0m22.600s 00:11:10.544 user 0m22.161s 00:11:10.544 sys 0m1.895s 00:11:10.544 13:31:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.544 13:31:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.544 ************************************ 00:11:10.544 END TEST skip_rpc 00:11:10.544 ************************************ 00:11:10.544 13:31:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:10.544 13:31:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.544 13:31:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.544 13:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:10.544 
************************************ 00:11:10.544 START TEST rpc_client 00:11:10.544 ************************************ 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:10.544 * Looking for test storage... 00:11:10.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.544 13:31:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.544 --rc genhtml_branch_coverage=1 00:11:10.544 --rc genhtml_function_coverage=1 00:11:10.544 --rc genhtml_legend=1 00:11:10.544 --rc geninfo_all_blocks=1 00:11:10.544 --rc geninfo_unexecuted_blocks=1 00:11:10.544 00:11:10.544 ' 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.544 --rc genhtml_branch_coverage=1 00:11:10.544 --rc genhtml_function_coverage=1 00:11:10.544 --rc genhtml_legend=1 00:11:10.544 --rc geninfo_all_blocks=1 00:11:10.544 --rc geninfo_unexecuted_blocks=1 00:11:10.544 00:11:10.544 ' 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.544 --rc genhtml_branch_coverage=1 00:11:10.544 --rc genhtml_function_coverage=1 00:11:10.544 --rc genhtml_legend=1 00:11:10.544 --rc geninfo_all_blocks=1 00:11:10.544 --rc geninfo_unexecuted_blocks=1 00:11:10.544 00:11:10.544 ' 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.544 --rc genhtml_branch_coverage=1 00:11:10.544 --rc genhtml_function_coverage=1 00:11:10.544 --rc genhtml_legend=1 00:11:10.544 --rc geninfo_all_blocks=1 00:11:10.544 --rc geninfo_unexecuted_blocks=1 00:11:10.544 00:11:10.544 ' 00:11:10.544 13:31:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:10.544 OK 00:11:10.544 13:31:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:10.544 00:11:10.544 real 0m0.237s 00:11:10.544 user 0m0.136s 00:11:10.544 sys 0m0.110s 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.544 13:31:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:10.544 ************************************ 00:11:10.544 END TEST rpc_client 00:11:10.544 ************************************ 00:11:10.544 13:31:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:10.544 13:31:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.544 13:31:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.544 13:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:10.544 ************************************ 00:11:10.544 START TEST json_config 00:11:10.544 ************************************ 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.544 13:31:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.544 13:31:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.544 13:31:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.544 13:31:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.544 13:31:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.544 13:31:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.544 13:31:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.544 13:31:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:10.544 13:31:02 json_config -- scripts/common.sh@345 -- # : 1 00:11:10.544 13:31:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.544 13:31:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.544 13:31:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:10.544 13:31:02 json_config -- scripts/common.sh@353 -- # local d=1 00:11:10.544 13:31:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.544 13:31:02 json_config -- scripts/common.sh@355 -- # echo 1 00:11:10.544 13:31:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.544 13:31:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@353 -- # local d=2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.544 13:31:02 json_config -- scripts/common.sh@355 -- # echo 2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.544 13:31:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.544 13:31:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.544 13:31:02 json_config -- scripts/common.sh@368 -- # return 0 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.544 --rc genhtml_branch_coverage=1 00:11:10.544 --rc genhtml_function_coverage=1 00:11:10.544 --rc genhtml_legend=1 00:11:10.544 --rc geninfo_all_blocks=1 00:11:10.544 --rc geninfo_unexecuted_blocks=1 00:11:10.544 00:11:10.544 ' 00:11:10.544 13:31:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.545 --rc genhtml_branch_coverage=1 00:11:10.545 --rc genhtml_function_coverage=1 00:11:10.545 --rc genhtml_legend=1 00:11:10.545 --rc geninfo_all_blocks=1 00:11:10.545 --rc geninfo_unexecuted_blocks=1 00:11:10.545 00:11:10.545 ' 00:11:10.545 13:31:02 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.545 --rc genhtml_branch_coverage=1 00:11:10.545 --rc genhtml_function_coverage=1 00:11:10.545 --rc genhtml_legend=1 00:11:10.545 --rc geninfo_all_blocks=1 00:11:10.545 --rc geninfo_unexecuted_blocks=1 00:11:10.545 00:11:10.545 ' 00:11:10.545 13:31:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.545 --rc genhtml_branch_coverage=1 00:11:10.545 --rc genhtml_function_coverage=1 00:11:10.545 --rc genhtml_legend=1 00:11:10.545 --rc geninfo_all_blocks=1 00:11:10.545 --rc geninfo_unexecuted_blocks=1 00:11:10.545 00:11:10.545 ' 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.545 13:31:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8d44fa66-3027-4e9a-96e5-d14ae0262833 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8d44fa66-3027-4e9a-96e5-d14ae0262833 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.545 13:31:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.545 13:31:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.545 13:31:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.545 13:31:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.545 13:31:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.545 13:31:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.545 13:31:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.545 13:31:02 json_config -- paths/export.sh@5 -- # export PATH 00:11:10.545 13:31:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@51 -- # : 0 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.545 13:31:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.545 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.545 13:31:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:10.545 WARNING: No tests are enabled so not running JSON configuration tests 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:11:10.545 13:31:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:11:10.545 00:11:10.545 real 0m0.201s 00:11:10.545 user 0m0.147s 00:11:10.545 sys 0m0.057s 00:11:10.545 13:31:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.545 13:31:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.545 ************************************ 00:11:10.545 END TEST json_config 00:11:10.545 ************************************ 00:11:10.804 13:31:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:10.804 13:31:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.804 13:31:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.804 13:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:10.804 ************************************ 00:11:10.804 START TEST json_config_extra_key 00:11:10.804 ************************************ 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.804 13:31:02 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.804 --rc genhtml_branch_coverage=1 00:11:10.804 --rc genhtml_function_coverage=1 00:11:10.804 --rc genhtml_legend=1 00:11:10.804 --rc geninfo_all_blocks=1 00:11:10.804 --rc geninfo_unexecuted_blocks=1 00:11:10.804 00:11:10.804 ' 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.804 --rc genhtml_branch_coverage=1 00:11:10.804 --rc genhtml_function_coverage=1 00:11:10.804 --rc genhtml_legend=1 00:11:10.804 --rc geninfo_all_blocks=1 00:11:10.804 --rc geninfo_unexecuted_blocks=1 00:11:10.804 00:11:10.804 ' 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.804 --rc genhtml_branch_coverage=1 00:11:10.804 --rc genhtml_function_coverage=1 00:11:10.804 --rc genhtml_legend=1 00:11:10.804 --rc geninfo_all_blocks=1 00:11:10.804 --rc geninfo_unexecuted_blocks=1 00:11:10.804 00:11:10.804 ' 00:11:10.804 13:31:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.804 --rc genhtml_branch_coverage=1 00:11:10.804 --rc 
genhtml_function_coverage=1 00:11:10.804 --rc genhtml_legend=1 00:11:10.804 --rc geninfo_all_blocks=1 00:11:10.804 --rc geninfo_unexecuted_blocks=1 00:11:10.804 00:11:10.804 ' 00:11:10.804 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8d44fa66-3027-4e9a-96e5-d14ae0262833 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8d44fa66-3027-4e9a-96e5-d14ae0262833 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.804 13:31:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.804 13:31:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.805 13:31:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.805 13:31:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.805 13:31:02 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.805 13:31:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:10.805 13:31:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.805 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.805 13:31:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:10.805 INFO: launching applications... 
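Note on the version check traced a little above: before each suite, common/autotest_common.sh decides whether the installed lcov predates 2.x ("lt 1.15 2") so it can pick the matching --rc option spelling for LCOV_OPTS. The trace shows scripts/common.sh splitting both version strings on ".", "-" and ":" (IFS=.-:), validating each field against ^[0-9]+$, and comparing field by field. A condensed sketch of the same idea; version_lt is an illustrative name, not the verbatim scripts/common.sh helper:

  # Return 0 (true) when $1 sorts strictly before $2, field by field.
  # Assumes numeric fields, as the traced ^[0-9]+$ check guarantees.
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
      for (( v = 0; v < n; v++ )); do
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1  # first larger field: not less-than
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0  # first smaller field: less-than
      done
      return 1                                       # equal versions: not less-than
  }
  version_lt 1.15 2 && echo "lcov is pre-2.x"        # matches the traced outcome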
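Note on the diagnostic repeated above ("/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected"): it is emitted every time nvmf/common.sh is sourced, because line 33 feeds an empty string to the numeric test, as the trace shows: '[' '' -eq 1 ']'. The test simply evaluates false and the run continues, but the noise is avoidable by giving the operand a numeric default. A minimal sketch of the guard; SOME_FLAG stands in for whatever variable line 33 actually tests:

  # '[ "" -eq 1 ]' aborts the test with "integer expression expected".
  # Expanding with a numeric default keeps the comparison well-formed.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # SOME_FLAG is a hypothetical name
      echo "flag enabled"
  fi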
00:11:10.805 13:31:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:10.805 Waiting for target to run... 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58978 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:10.805 13:31:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58978 /var/tmp/spdk_tgt.sock 00:11:10.805 13:31:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58978 ']' 00:11:10.805 13:31:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:10.805 13:31:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.805 13:31:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:10.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:10.805 13:31:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.805 13:31:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:11.074 [2024-11-20 13:31:02.863913] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:11:11.074 [2024-11-20 13:31:02.864082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58978 ] 00:11:11.331 [2024-11-20 13:31:03.246006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.331 [2024-11-20 13:31:03.354525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.266 13:31:04 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.266 00:11:12.266 13:31:04 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:12.266 INFO: shutting down applications... 00:11:12.266 13:31:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
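Note: json_config_test_start_app launches spdk_tgt with a 1024 MB hugepage cap (-s 1024) and a private RPC socket (-r /var/tmp/spdk_tgt.sock), then blocks in waitforlisten until that socket answers; the trace shows the helper's max_retries=100 but not its loop body. A minimal sketch of such a wait under those assumptions (wait_for_rpc and the 0.1 s poll interval are illustrative, not the autotest values):

  # Poll until the RPC UNIX socket appears, bailing out if the target dies.
  wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
          [ -S "$sock" ] && return 0               # socket file exists
          sleep 0.1
      done
      return 1                                     # timed out
  }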
00:11:12.266 13:31:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58978 ]] 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58978 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58978 00:11:12.266 13:31:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:12.524 13:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:12.524 13:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:12.524 13:31:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58978 00:11:12.524 13:31:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:13.091 13:31:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:13.091 13:31:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:13.091 13:31:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58978 00:11:13.091 13:31:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:13.657 13:31:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:13.657 13:31:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:13.657 13:31:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58978 00:11:13.657 13:31:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:14.224 13:31:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:14.224 13:31:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:14.224 13:31:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58978 00:11:14.224 13:31:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:14.790 13:31:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:14.790 13:31:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:14.790 13:31:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58978 00:11:14.790 13:31:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:14.790 13:31:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:14.790 13:31:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:14.790 13:31:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:14.790 SPDK target shutdown done 00:11:14.790 Success 00:11:14.790 13:31:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:14.790 00:11:14.790 real 0m3.953s 00:11:14.790 user 0m3.903s 00:11:14.790 sys 0m0.454s 00:11:14.790 13:31:06 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.790 13:31:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:14.790 ************************************ 00:11:14.790 END TEST json_config_extra_key 00:11:14.791 ************************************ 00:11:14.791 13:31:06 -- spdk/autotest.sh@161 
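Note: the teardown just traced is fully self-contained in json_config/common.sh: send SIGINT to the target, then probe "kill -0 $pid" in half-second steps, up to 30 times, until the process is gone. Reassembled from the trace (the loop bound and sleep are exactly as logged; here the target needed about five passes, roughly 2.5 s):

  kill -SIGINT "$pid"                        # ask spdk_tgt to shut down cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break    # signal 0 only probes liveness
      sleep 0.5
  done

When the loop breaks early the harness clears app_pid and prints "SPDK target shutdown done", as seen above; exhausting all 30 passes would leave the pid set and trigger the error path instead.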
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:14.791 13:31:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.791 13:31:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.791 13:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:14.791 ************************************ 00:11:14.791 START TEST alias_rpc 00:11:14.791 ************************************ 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:14.791 * Looking for test storage... 00:11:14.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.791 13:31:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.791 --rc genhtml_branch_coverage=1 00:11:14.791 --rc genhtml_function_coverage=1 00:11:14.791 --rc genhtml_legend=1 00:11:14.791 --rc geninfo_all_blocks=1 00:11:14.791 --rc geninfo_unexecuted_blocks=1 00:11:14.791 00:11:14.791 ' 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.791 --rc genhtml_branch_coverage=1 00:11:14.791 --rc genhtml_function_coverage=1 00:11:14.791 --rc genhtml_legend=1 00:11:14.791 --rc geninfo_all_blocks=1 00:11:14.791 --rc geninfo_unexecuted_blocks=1 00:11:14.791 00:11:14.791 ' 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.791 --rc genhtml_branch_coverage=1 00:11:14.791 --rc genhtml_function_coverage=1 00:11:14.791 --rc genhtml_legend=1 00:11:14.791 --rc geninfo_all_blocks=1 00:11:14.791 --rc geninfo_unexecuted_blocks=1 00:11:14.791 00:11:14.791 ' 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.791 --rc genhtml_branch_coverage=1 00:11:14.791 --rc genhtml_function_coverage=1 00:11:14.791 --rc genhtml_legend=1 00:11:14.791 --rc geninfo_all_blocks=1 00:11:14.791 --rc geninfo_unexecuted_blocks=1 00:11:14.791 00:11:14.791 ' 00:11:14.791 13:31:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:14.791 13:31:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59083 00:11:14.791 13:31:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59083 00:11:14.791 13:31:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59083 ']' 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.791 13:31:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.050 [2024-11-20 13:31:06.928270] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:11:15.050 [2024-11-20 13:31:06.928461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59083 ] 00:11:15.308 [2024-11-20 13:31:07.109465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.308 [2024-11-20 13:31:07.267856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.244 13:31:08 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.244 13:31:08 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:16.244 13:31:08 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:16.502 13:31:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59083 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59083 ']' 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59083 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59083 00:11:16.502 killing process with pid 59083 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59083' 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@973 -- # kill 59083 00:11:16.502 13:31:08 alias_rpc -- common/autotest_common.sh@978 -- # wait 59083 00:11:19.089 ************************************ 00:11:19.089 END TEST alias_rpc 00:11:19.089 ************************************ 00:11:19.089 00:11:19.089 real 0m3.990s 00:11:19.089 user 0m4.355s 00:11:19.089 sys 0m0.508s 00:11:19.089 13:31:10 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.089 13:31:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.089 13:31:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:19.089 13:31:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:19.089 13:31:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.089 13:31:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.089 13:31:10 -- common/autotest_common.sh@10 -- # set +x 00:11:19.089 ************************************ 00:11:19.089 START TEST spdkcli_tcp 00:11:19.089 ************************************ 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:19.089 * Looking for test storage... 
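Note: alias_rpc drives the target purely through scripts/rpc.py (the traced load_config -i call) and then tears it down with killprocess 59083, which the trace shows double-checking its victim first: kill -0 to confirm the pid is alive, uname to confirm Linux, and "ps --no-headers -o comm=" to confirm the command name (reactor_0 here, i.e. an SPDK reactor thread) is not sudo before killing and waiting. A condensed sketch of that defensive kill, not the verbatim autotest helper:

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 1                  # nothing to do: already gone
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for spdk_tgt
      [ "$name" = sudo ] && return 1              # never signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                     # reaps only our own children
  }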
00:11:19.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.089 13:31:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.089 --rc genhtml_branch_coverage=1 00:11:19.089 --rc genhtml_function_coverage=1 00:11:19.089 --rc genhtml_legend=1 00:11:19.089 --rc geninfo_all_blocks=1 00:11:19.089 --rc geninfo_unexecuted_blocks=1 00:11:19.089 00:11:19.089 ' 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.089 --rc genhtml_branch_coverage=1 00:11:19.089 --rc genhtml_function_coverage=1 00:11:19.089 --rc genhtml_legend=1 00:11:19.089 --rc geninfo_all_blocks=1 00:11:19.089 --rc geninfo_unexecuted_blocks=1 00:11:19.089 
00:11:19.089 ' 00:11:19.089 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.089 --rc genhtml_branch_coverage=1 00:11:19.089 --rc genhtml_function_coverage=1 00:11:19.089 --rc genhtml_legend=1 00:11:19.089 --rc geninfo_all_blocks=1 00:11:19.089 --rc geninfo_unexecuted_blocks=1 00:11:19.090 00:11:19.090 ' 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.090 --rc genhtml_branch_coverage=1 00:11:19.090 --rc genhtml_function_coverage=1 00:11:19.090 --rc genhtml_legend=1 00:11:19.090 --rc geninfo_all_blocks=1 00:11:19.090 --rc geninfo_unexecuted_blocks=1 00:11:19.090 00:11:19.090 ' 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59185 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:19.090 13:31:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59185 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59185 ']' 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.090 13:31:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.090 [2024-11-20 13:31:10.933840] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
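Note: this suite starts the target as "spdk_tgt -m 0x3 -p 0". The -m argument is a hexadecimal CPU core mask, and 0x3 (binary 11) selects cores 0 and 1, which is why the startup lines just below report two available cores and start a reactor on each; -p 0 picks core 0 as the main core. A mask for the first N cores is simply (1 << N) - 1, for example:

  n_cores=2
  printf -- '-m 0x%x -p 0\n' $(( (1 << n_cores) - 1 ))   # prints: -m 0x3 -p 0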
00:11:19.090 [2024-11-20 13:31:10.934328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59185 ] 00:11:19.348 [2024-11-20 13:31:11.139917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:19.348 [2024-11-20 13:31:11.278695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.348 [2024-11-20 13:31:11.278705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.282 13:31:12 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.282 13:31:12 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:11:20.282 13:31:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59207 00:11:20.282 13:31:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:20.282 13:31:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:20.541 [ 00:11:20.541 "bdev_malloc_delete", 00:11:20.541 "bdev_malloc_create", 00:11:20.541 "bdev_null_resize", 00:11:20.541 "bdev_null_delete", 00:11:20.541 "bdev_null_create", 00:11:20.541 "bdev_nvme_cuse_unregister", 00:11:20.541 "bdev_nvme_cuse_register", 00:11:20.541 "bdev_opal_new_user", 00:11:20.541 "bdev_opal_set_lock_state", 00:11:20.541 "bdev_opal_delete", 00:11:20.541 "bdev_opal_get_info", 00:11:20.541 "bdev_opal_create", 00:11:20.541 "bdev_nvme_opal_revert", 00:11:20.541 "bdev_nvme_opal_init", 00:11:20.541 "bdev_nvme_send_cmd", 00:11:20.541 "bdev_nvme_set_keys", 00:11:20.541 "bdev_nvme_get_path_iostat", 00:11:20.541 "bdev_nvme_get_mdns_discovery_info", 00:11:20.541 "bdev_nvme_stop_mdns_discovery", 00:11:20.541 "bdev_nvme_start_mdns_discovery", 00:11:20.541 "bdev_nvme_set_multipath_policy", 00:11:20.541 "bdev_nvme_set_preferred_path", 00:11:20.541 "bdev_nvme_get_io_paths", 00:11:20.541 "bdev_nvme_remove_error_injection", 00:11:20.541 "bdev_nvme_add_error_injection", 00:11:20.541 "bdev_nvme_get_discovery_info", 00:11:20.541 "bdev_nvme_stop_discovery", 00:11:20.541 "bdev_nvme_start_discovery", 00:11:20.541 "bdev_nvme_get_controller_health_info", 00:11:20.541 "bdev_nvme_disable_controller", 00:11:20.541 "bdev_nvme_enable_controller", 00:11:20.541 "bdev_nvme_reset_controller", 00:11:20.541 "bdev_nvme_get_transport_statistics", 00:11:20.541 "bdev_nvme_apply_firmware", 00:11:20.541 "bdev_nvme_detach_controller", 00:11:20.541 "bdev_nvme_get_controllers", 00:11:20.541 "bdev_nvme_attach_controller", 00:11:20.541 "bdev_nvme_set_hotplug", 00:11:20.541 "bdev_nvme_set_options", 00:11:20.541 "bdev_passthru_delete", 00:11:20.541 "bdev_passthru_create", 00:11:20.541 "bdev_lvol_set_parent_bdev", 00:11:20.541 "bdev_lvol_set_parent", 00:11:20.541 "bdev_lvol_check_shallow_copy", 00:11:20.541 "bdev_lvol_start_shallow_copy", 00:11:20.541 "bdev_lvol_grow_lvstore", 00:11:20.541 "bdev_lvol_get_lvols", 00:11:20.541 "bdev_lvol_get_lvstores", 00:11:20.541 "bdev_lvol_delete", 00:11:20.541 "bdev_lvol_set_read_only", 00:11:20.541 "bdev_lvol_resize", 00:11:20.541 "bdev_lvol_decouple_parent", 00:11:20.541 "bdev_lvol_inflate", 00:11:20.541 "bdev_lvol_rename", 00:11:20.541 "bdev_lvol_clone_bdev", 00:11:20.541 "bdev_lvol_clone", 00:11:20.541 "bdev_lvol_snapshot", 00:11:20.541 "bdev_lvol_create", 00:11:20.541 "bdev_lvol_delete_lvstore", 00:11:20.541 "bdev_lvol_rename_lvstore", 00:11:20.541 
"bdev_lvol_create_lvstore", 00:11:20.541 "bdev_raid_set_options", 00:11:20.541 "bdev_raid_remove_base_bdev", 00:11:20.541 "bdev_raid_add_base_bdev", 00:11:20.541 "bdev_raid_delete", 00:11:20.541 "bdev_raid_create", 00:11:20.541 "bdev_raid_get_bdevs", 00:11:20.541 "bdev_error_inject_error", 00:11:20.541 "bdev_error_delete", 00:11:20.541 "bdev_error_create", 00:11:20.541 "bdev_split_delete", 00:11:20.541 "bdev_split_create", 00:11:20.541 "bdev_delay_delete", 00:11:20.541 "bdev_delay_create", 00:11:20.541 "bdev_delay_update_latency", 00:11:20.541 "bdev_zone_block_delete", 00:11:20.541 "bdev_zone_block_create", 00:11:20.541 "blobfs_create", 00:11:20.541 "blobfs_detect", 00:11:20.541 "blobfs_set_cache_size", 00:11:20.541 "bdev_xnvme_delete", 00:11:20.541 "bdev_xnvme_create", 00:11:20.541 "bdev_aio_delete", 00:11:20.541 "bdev_aio_rescan", 00:11:20.541 "bdev_aio_create", 00:11:20.541 "bdev_ftl_set_property", 00:11:20.541 "bdev_ftl_get_properties", 00:11:20.541 "bdev_ftl_get_stats", 00:11:20.541 "bdev_ftl_unmap", 00:11:20.541 "bdev_ftl_unload", 00:11:20.541 "bdev_ftl_delete", 00:11:20.541 "bdev_ftl_load", 00:11:20.541 "bdev_ftl_create", 00:11:20.541 "bdev_virtio_attach_controller", 00:11:20.541 "bdev_virtio_scsi_get_devices", 00:11:20.541 "bdev_virtio_detach_controller", 00:11:20.541 "bdev_virtio_blk_set_hotplug", 00:11:20.541 "bdev_iscsi_delete", 00:11:20.541 "bdev_iscsi_create", 00:11:20.541 "bdev_iscsi_set_options", 00:11:20.541 "accel_error_inject_error", 00:11:20.541 "ioat_scan_accel_module", 00:11:20.541 "dsa_scan_accel_module", 00:11:20.541 "iaa_scan_accel_module", 00:11:20.541 "keyring_file_remove_key", 00:11:20.541 "keyring_file_add_key", 00:11:20.541 "keyring_linux_set_options", 00:11:20.541 "fsdev_aio_delete", 00:11:20.541 "fsdev_aio_create", 00:11:20.541 "iscsi_get_histogram", 00:11:20.541 "iscsi_enable_histogram", 00:11:20.541 "iscsi_set_options", 00:11:20.541 "iscsi_get_auth_groups", 00:11:20.541 "iscsi_auth_group_remove_secret", 00:11:20.541 "iscsi_auth_group_add_secret", 00:11:20.541 "iscsi_delete_auth_group", 00:11:20.541 "iscsi_create_auth_group", 00:11:20.541 "iscsi_set_discovery_auth", 00:11:20.541 "iscsi_get_options", 00:11:20.541 "iscsi_target_node_request_logout", 00:11:20.541 "iscsi_target_node_set_redirect", 00:11:20.541 "iscsi_target_node_set_auth", 00:11:20.541 "iscsi_target_node_add_lun", 00:11:20.541 "iscsi_get_stats", 00:11:20.541 "iscsi_get_connections", 00:11:20.541 "iscsi_portal_group_set_auth", 00:11:20.541 "iscsi_start_portal_group", 00:11:20.541 "iscsi_delete_portal_group", 00:11:20.541 "iscsi_create_portal_group", 00:11:20.541 "iscsi_get_portal_groups", 00:11:20.541 "iscsi_delete_target_node", 00:11:20.541 "iscsi_target_node_remove_pg_ig_maps", 00:11:20.541 "iscsi_target_node_add_pg_ig_maps", 00:11:20.541 "iscsi_create_target_node", 00:11:20.541 "iscsi_get_target_nodes", 00:11:20.541 "iscsi_delete_initiator_group", 00:11:20.541 "iscsi_initiator_group_remove_initiators", 00:11:20.541 "iscsi_initiator_group_add_initiators", 00:11:20.541 "iscsi_create_initiator_group", 00:11:20.541 "iscsi_get_initiator_groups", 00:11:20.541 "nvmf_set_crdt", 00:11:20.541 "nvmf_set_config", 00:11:20.541 "nvmf_set_max_subsystems", 00:11:20.541 "nvmf_stop_mdns_prr", 00:11:20.541 "nvmf_publish_mdns_prr", 00:11:20.541 "nvmf_subsystem_get_listeners", 00:11:20.541 "nvmf_subsystem_get_qpairs", 00:11:20.541 "nvmf_subsystem_get_controllers", 00:11:20.541 "nvmf_get_stats", 00:11:20.541 "nvmf_get_transports", 00:11:20.541 "nvmf_create_transport", 00:11:20.541 "nvmf_get_targets", 00:11:20.541 
"nvmf_delete_target", 00:11:20.541 "nvmf_create_target", 00:11:20.541 "nvmf_subsystem_allow_any_host", 00:11:20.541 "nvmf_subsystem_set_keys", 00:11:20.541 "nvmf_subsystem_remove_host", 00:11:20.541 "nvmf_subsystem_add_host", 00:11:20.541 "nvmf_ns_remove_host", 00:11:20.541 "nvmf_ns_add_host", 00:11:20.541 "nvmf_subsystem_remove_ns", 00:11:20.541 "nvmf_subsystem_set_ns_ana_group", 00:11:20.541 "nvmf_subsystem_add_ns", 00:11:20.541 "nvmf_subsystem_listener_set_ana_state", 00:11:20.542 "nvmf_discovery_get_referrals", 00:11:20.542 "nvmf_discovery_remove_referral", 00:11:20.542 "nvmf_discovery_add_referral", 00:11:20.542 "nvmf_subsystem_remove_listener", 00:11:20.542 "nvmf_subsystem_add_listener", 00:11:20.542 "nvmf_delete_subsystem", 00:11:20.542 "nvmf_create_subsystem", 00:11:20.542 "nvmf_get_subsystems", 00:11:20.542 "env_dpdk_get_mem_stats", 00:11:20.542 "nbd_get_disks", 00:11:20.542 "nbd_stop_disk", 00:11:20.542 "nbd_start_disk", 00:11:20.542 "ublk_recover_disk", 00:11:20.542 "ublk_get_disks", 00:11:20.542 "ublk_stop_disk", 00:11:20.542 "ublk_start_disk", 00:11:20.542 "ublk_destroy_target", 00:11:20.542 "ublk_create_target", 00:11:20.542 "virtio_blk_create_transport", 00:11:20.542 "virtio_blk_get_transports", 00:11:20.542 "vhost_controller_set_coalescing", 00:11:20.542 "vhost_get_controllers", 00:11:20.542 "vhost_delete_controller", 00:11:20.542 "vhost_create_blk_controller", 00:11:20.542 "vhost_scsi_controller_remove_target", 00:11:20.542 "vhost_scsi_controller_add_target", 00:11:20.542 "vhost_start_scsi_controller", 00:11:20.542 "vhost_create_scsi_controller", 00:11:20.542 "thread_set_cpumask", 00:11:20.542 "scheduler_set_options", 00:11:20.542 "framework_get_governor", 00:11:20.542 "framework_get_scheduler", 00:11:20.542 "framework_set_scheduler", 00:11:20.542 "framework_get_reactors", 00:11:20.542 "thread_get_io_channels", 00:11:20.542 "thread_get_pollers", 00:11:20.542 "thread_get_stats", 00:11:20.542 "framework_monitor_context_switch", 00:11:20.542 "spdk_kill_instance", 00:11:20.542 "log_enable_timestamps", 00:11:20.542 "log_get_flags", 00:11:20.542 "log_clear_flag", 00:11:20.542 "log_set_flag", 00:11:20.542 "log_get_level", 00:11:20.542 "log_set_level", 00:11:20.542 "log_get_print_level", 00:11:20.542 "log_set_print_level", 00:11:20.542 "framework_enable_cpumask_locks", 00:11:20.542 "framework_disable_cpumask_locks", 00:11:20.542 "framework_wait_init", 00:11:20.542 "framework_start_init", 00:11:20.542 "scsi_get_devices", 00:11:20.542 "bdev_get_histogram", 00:11:20.542 "bdev_enable_histogram", 00:11:20.542 "bdev_set_qos_limit", 00:11:20.542 "bdev_set_qd_sampling_period", 00:11:20.542 "bdev_get_bdevs", 00:11:20.542 "bdev_reset_iostat", 00:11:20.542 "bdev_get_iostat", 00:11:20.542 "bdev_examine", 00:11:20.542 "bdev_wait_for_examine", 00:11:20.542 "bdev_set_options", 00:11:20.542 "accel_get_stats", 00:11:20.542 "accel_set_options", 00:11:20.542 "accel_set_driver", 00:11:20.542 "accel_crypto_key_destroy", 00:11:20.542 "accel_crypto_keys_get", 00:11:20.542 "accel_crypto_key_create", 00:11:20.542 "accel_assign_opc", 00:11:20.542 "accel_get_module_info", 00:11:20.542 "accel_get_opc_assignments", 00:11:20.542 "vmd_rescan", 00:11:20.542 "vmd_remove_device", 00:11:20.542 "vmd_enable", 00:11:20.542 "sock_get_default_impl", 00:11:20.542 "sock_set_default_impl", 00:11:20.542 "sock_impl_set_options", 00:11:20.542 "sock_impl_get_options", 00:11:20.542 "iobuf_get_stats", 00:11:20.542 "iobuf_set_options", 00:11:20.542 "keyring_get_keys", 00:11:20.542 "framework_get_pci_devices", 00:11:20.542 
"framework_get_config", 00:11:20.542 "framework_get_subsystems", 00:11:20.542 "fsdev_set_opts", 00:11:20.542 "fsdev_get_opts", 00:11:20.542 "trace_get_info", 00:11:20.542 "trace_get_tpoint_group_mask", 00:11:20.542 "trace_disable_tpoint_group", 00:11:20.542 "trace_enable_tpoint_group", 00:11:20.542 "trace_clear_tpoint_mask", 00:11:20.542 "trace_set_tpoint_mask", 00:11:20.542 "notify_get_notifications", 00:11:20.542 "notify_get_types", 00:11:20.542 "spdk_get_version", 00:11:20.542 "rpc_get_methods" 00:11:20.542 ] 00:11:20.542 13:31:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:20.542 13:31:12 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.542 13:31:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.542 13:31:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:20.542 13:31:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59185 00:11:20.542 13:31:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59185 ']' 00:11:20.542 13:31:12 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59185 00:11:20.542 13:31:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:11:20.542 13:31:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.542 13:31:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59185 00:11:20.800 killing process with pid 59185 00:11:20.800 13:31:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.800 13:31:12 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.800 13:31:12 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59185' 00:11:20.800 13:31:12 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59185 00:11:20.800 13:31:12 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59185 00:11:22.699 ************************************ 00:11:22.699 END TEST spdkcli_tcp 00:11:22.699 ************************************ 00:11:22.699 00:11:22.699 real 0m4.076s 00:11:22.699 user 0m7.617s 00:11:22.699 sys 0m0.574s 00:11:22.699 13:31:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.699 13:31:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.699 13:31:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:22.699 13:31:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.699 13:31:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.699 13:31:14 -- common/autotest_common.sh@10 -- # set +x 00:11:22.957 ************************************ 00:11:22.957 START TEST dpdk_mem_utility 00:11:22.957 ************************************ 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:22.957 * Looking for test storage... 
00:11:22.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:22.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.957 13:31:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.957 --rc genhtml_branch_coverage=1 00:11:22.957 --rc genhtml_function_coverage=1 00:11:22.957 --rc genhtml_legend=1 00:11:22.957 --rc geninfo_all_blocks=1 00:11:22.957 --rc geninfo_unexecuted_blocks=1 00:11:22.957 00:11:22.957 ' 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.957 --rc genhtml_branch_coverage=1 00:11:22.957 --rc genhtml_function_coverage=1 00:11:22.957 --rc genhtml_legend=1 00:11:22.957 --rc geninfo_all_blocks=1 00:11:22.957 --rc geninfo_unexecuted_blocks=1 00:11:22.957 00:11:22.957 ' 00:11:22.957 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.958 --rc genhtml_branch_coverage=1 00:11:22.958 --rc genhtml_function_coverage=1 00:11:22.958 --rc genhtml_legend=1 00:11:22.958 --rc geninfo_all_blocks=1 00:11:22.958 --rc geninfo_unexecuted_blocks=1 00:11:22.958 00:11:22.958 ' 00:11:22.958 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.958 --rc genhtml_branch_coverage=1 00:11:22.958 --rc genhtml_function_coverage=1 00:11:22.958 --rc genhtml_legend=1 00:11:22.958 --rc geninfo_all_blocks=1 00:11:22.958 --rc geninfo_unexecuted_blocks=1 00:11:22.958 00:11:22.958 ' 00:11:22.958 13:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:22.958 13:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59307 00:11:22.958 13:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:22.958 13:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59307 00:11:22.958 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59307 ']' 00:11:22.958 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.958 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.958 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.958 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.958 13:31:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:23.216 [2024-11-20 13:31:15.062391] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:11:23.216 [2024-11-20 13:31:15.062824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:11:23.491 [2024-11-20 13:31:15.255598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.491 [2024-11-20 13:31:15.358934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.436 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.436 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:11:24.436 13:31:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:24.436 13:31:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:24.436 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.436 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:24.436 { 00:11:24.436 "filename": "/tmp/spdk_mem_dump.txt" 00:11:24.436 } 00:11:24.436 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.436 13:31:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:24.436 DPDK memory size 816.000000 MiB in 1 heap(s) 00:11:24.436 1 heaps totaling size 816.000000 MiB 00:11:24.436 size: 816.000000 MiB heap id: 0 00:11:24.436 end heaps---------- 00:11:24.436 9 mempools totaling size 595.772034 MiB 00:11:24.436 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:24.436 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:24.436 size: 92.545471 MiB name: bdev_io_59307 00:11:24.436 size: 50.003479 MiB name: msgpool_59307 00:11:24.436 size: 36.509338 MiB name: fsdev_io_59307 00:11:24.436 size: 21.763794 MiB name: PDU_Pool 00:11:24.436 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:24.436 size: 4.133484 MiB name: evtpool_59307 00:11:24.436 size: 0.026123 MiB name: Session_Pool 00:11:24.436 end mempools------- 00:11:24.436 6 memzones totaling size 4.142822 MiB 00:11:24.436 size: 1.000366 MiB name: RG_ring_0_59307 00:11:24.436 size: 1.000366 MiB name: RG_ring_1_59307 00:11:24.436 size: 1.000366 MiB name: RG_ring_4_59307 00:11:24.436 size: 1.000366 MiB name: RG_ring_5_59307 00:11:24.436 size: 0.125366 MiB name: RG_ring_2_59307 00:11:24.436 size: 0.015991 MiB name: RG_ring_3_59307 00:11:24.436 end memzones------- 00:11:24.436 13:31:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:24.436 heap id: 0 total size: 816.000000 MiB number of busy elements: 310 number of free elements: 18 00:11:24.436 list of free elements. 
size: 16.792603 MiB 00:11:24.436 element at address: 0x200006400000 with size: 1.995972 MiB 00:11:24.436 element at address: 0x20000a600000 with size: 1.995972 MiB 00:11:24.436 element at address: 0x200003e00000 with size: 1.991028 MiB 00:11:24.436 element at address: 0x200018d00040 with size: 0.999939 MiB 00:11:24.436 element at address: 0x200019100040 with size: 0.999939 MiB 00:11:24.436 element at address: 0x200019200000 with size: 0.999084 MiB 00:11:24.436 element at address: 0x200031e00000 with size: 0.994324 MiB 00:11:24.436 element at address: 0x200000400000 with size: 0.992004 MiB 00:11:24.436 element at address: 0x200018a00000 with size: 0.959656 MiB 00:11:24.436 element at address: 0x200019500040 with size: 0.936401 MiB 00:11:24.436 element at address: 0x200000200000 with size: 0.716980 MiB 00:11:24.436 element at address: 0x20001ac00000 with size: 0.562927 MiB 00:11:24.436 element at address: 0x200000c00000 with size: 0.490173 MiB 00:11:24.436 element at address: 0x200018e00000 with size: 0.487976 MiB 00:11:24.436 element at address: 0x200019600000 with size: 0.485413 MiB 00:11:24.436 element at address: 0x200012c00000 with size: 0.443481 MiB 00:11:24.436 element at address: 0x200028000000 with size: 0.390442 MiB 00:11:24.436 element at address: 0x200000800000 with size: 0.350891 MiB 00:11:24.436 list of standard malloc elements. size: 199.286499 MiB 00:11:24.436 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:11:24.436 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:11:24.436 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:11:24.436 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:11:24.436 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:11:24.436 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:11:24.436 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:11:24.436 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:11:24.436 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:11:24.436 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:11:24.436 element at address: 0x200012bff040 with size: 0.000305 MiB 00:11:24.436 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:11:24.436 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:11:24.436 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:11:24.437 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:11:24.437 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200000cff000 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff180 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff280 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff380 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff480 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff580 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff680 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff780 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff880 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bff980 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71880 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71980 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c72080 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012c72180 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:11:24.437 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:11:24.437 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:11:24.437 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac922c0 with size: 0.000244 MiB 
00:11:24.438 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:11:24.438 element at 
address: 0x200028063f40 with size: 0.000244 MiB 00:11:24.438 element at address: 0x200028064040 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806af80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b080 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b180 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b280 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b380 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b480 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b580 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b680 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b780 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b880 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806b980 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806be80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c080 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c180 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c280 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c380 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c480 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c580 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c680 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c780 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c880 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806c980 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d080 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d180 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d280 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d380 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d480 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d580 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d680 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d780 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d880 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806d980 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806da80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806db80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806dd80 
with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806de80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806df80 with size: 0.000244 MiB 00:11:24.438 element at address: 0x20002806e080 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e180 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e280 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e380 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e480 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e580 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e680 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e780 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e880 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806e980 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f080 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f180 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f280 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f380 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f480 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f580 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f680 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f780 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f880 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806f980 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:11:24.439 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:11:24.439 list of memzone associated elements. 
size: 599.920898 MiB 00:11:24.439 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:11:24.439 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:24.439 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:11:24.439 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:24.439 element at address: 0x200012df4740 with size: 92.045105 MiB 00:11:24.439 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59307_0 00:11:24.439 element at address: 0x200000dff340 with size: 48.003113 MiB 00:11:24.439 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59307_0 00:11:24.439 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:11:24.439 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59307_0 00:11:24.439 element at address: 0x2000197be900 with size: 20.255615 MiB 00:11:24.439 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:24.439 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:11:24.439 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:24.439 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:11:24.439 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59307_0 00:11:24.439 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:11:24.439 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59307 00:11:24.439 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:11:24.439 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59307 00:11:24.439 element at address: 0x200018efde00 with size: 1.008179 MiB 00:11:24.439 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:24.439 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:11:24.439 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:24.439 element at address: 0x200018afde00 with size: 1.008179 MiB 00:11:24.439 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:24.439 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:11:24.439 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:24.439 element at address: 0x200000cff100 with size: 1.000549 MiB 00:11:24.439 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59307 00:11:24.439 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:11:24.439 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59307 00:11:24.439 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:11:24.439 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59307 00:11:24.439 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:11:24.439 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59307 00:11:24.439 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:11:24.439 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59307 00:11:24.439 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:11:24.439 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59307 00:11:24.439 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:11:24.439 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:24.439 element at address: 0x200012c72280 with size: 0.500549 MiB 00:11:24.439 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:24.439 element at address: 0x20001967c440 with size: 0.250549 MiB 00:11:24.439 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:11:24.439 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:11:24.439 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59307 00:11:24.439 element at address: 0x20000085df80 with size: 0.125549 MiB 00:11:24.439 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59307 00:11:24.439 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:11:24.439 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:24.439 element at address: 0x200028064140 with size: 0.023804 MiB 00:11:24.439 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:24.439 element at address: 0x200000859d40 with size: 0.016174 MiB 00:11:24.439 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59307 00:11:24.439 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:11:24.439 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:24.439 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:11:24.439 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59307 00:11:24.439 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:11:24.439 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59307 00:11:24.439 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:11:24.439 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59307 00:11:24.439 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:11:24.439 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:24.439 13:31:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:24.439 13:31:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59307 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59307 ']' 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59307 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59307 00:11:24.439 killing process with pid 59307 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59307' 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59307 00:11:24.439 13:31:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59307 00:11:26.972 00:11:26.972 real 0m3.836s 00:11:26.972 user 0m3.979s 00:11:26.972 sys 0m0.535s 00:11:26.972 13:31:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.972 ************************************ 00:11:26.972 END TEST dpdk_mem_utility 00:11:26.972 ************************************ 00:11:26.972 13:31:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:26.972 13:31:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:26.972 13:31:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:26.972 13:31:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.972 13:31:18 -- common/autotest_common.sh@10 -- # set +x 
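Note on the dump above: the long "element at address ... with size ..." list and the "associated memzone info" records are DPDK's malloc-heap and memzone statistics, which test_dpdk_mem_info.sh (named in the trace) captures from the running target so that leaked allocations stand out across successive dumps. A minimal sketch of pulling the same dump from a live target by hand, assuming a built SPDK tree and the env_dpdk_get_mem_stats RPC; the /tmp/spdk_mem_dump.txt output path is the default I would expect, not confirmed by this log:

    ./build/bin/spdk_tgt &                       # start a target in the background
    tgt_pid=$!
    sleep 2                                      # crude stand-in for the harness's waitforlisten
    ./scripts/rpc.py env_dpdk_get_mem_stats      # ask the target to write its DPDK memory stats
    grep -c 'element at address' /tmp/spdk_mem_dump.txt   # count heap elements in the dump (path assumed)
    kill "$tgt_pid"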
00:11:26.972 ************************************ 00:11:26.972 START TEST event 00:11:26.972 ************************************ 00:11:26.972 13:31:18 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:26.972 * Looking for test storage... 00:11:26.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:26.972 13:31:18 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.972 13:31:18 event -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.972 13:31:18 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.973 13:31:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.973 13:31:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.973 13:31:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.973 13:31:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.973 13:31:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.973 13:31:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.973 13:31:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.973 13:31:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.973 13:31:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.973 13:31:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.973 13:31:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.973 13:31:18 event -- scripts/common.sh@344 -- # case "$op" in 00:11:26.973 13:31:18 event -- scripts/common.sh@345 -- # : 1 00:11:26.973 13:31:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.973 13:31:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.973 13:31:18 event -- scripts/common.sh@365 -- # decimal 1 00:11:26.973 13:31:18 event -- scripts/common.sh@353 -- # local d=1 00:11:26.973 13:31:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.973 13:31:18 event -- scripts/common.sh@355 -- # echo 1 00:11:26.973 13:31:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.973 13:31:18 event -- scripts/common.sh@366 -- # decimal 2 00:11:26.973 13:31:18 event -- scripts/common.sh@353 -- # local d=2 00:11:26.973 13:31:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.973 13:31:18 event -- scripts/common.sh@355 -- # echo 2 00:11:26.973 13:31:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.973 13:31:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.973 13:31:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.973 13:31:18 event -- scripts/common.sh@368 -- # return 0 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.973 --rc genhtml_branch_coverage=1 00:11:26.973 --rc genhtml_function_coverage=1 00:11:26.973 --rc genhtml_legend=1 00:11:26.973 --rc geninfo_all_blocks=1 00:11:26.973 --rc geninfo_unexecuted_blocks=1 00:11:26.973 00:11:26.973 ' 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.973 --rc genhtml_branch_coverage=1 00:11:26.973 --rc genhtml_function_coverage=1 00:11:26.973 --rc genhtml_legend=1 00:11:26.973 --rc 
geninfo_all_blocks=1 00:11:26.973 --rc geninfo_unexecuted_blocks=1 00:11:26.973 00:11:26.973 ' 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.973 --rc genhtml_branch_coverage=1 00:11:26.973 --rc genhtml_function_coverage=1 00:11:26.973 --rc genhtml_legend=1 00:11:26.973 --rc geninfo_all_blocks=1 00:11:26.973 --rc geninfo_unexecuted_blocks=1 00:11:26.973 00:11:26.973 ' 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.973 --rc genhtml_branch_coverage=1 00:11:26.973 --rc genhtml_function_coverage=1 00:11:26.973 --rc genhtml_legend=1 00:11:26.973 --rc geninfo_all_blocks=1 00:11:26.973 --rc geninfo_unexecuted_blocks=1 00:11:26.973 00:11:26.973 ' 00:11:26.973 13:31:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:26.973 13:31:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:26.973 13:31:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:26.973 13:31:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.973 13:31:18 event -- common/autotest_common.sh@10 -- # set +x 00:11:26.973 ************************************ 00:11:26.973 START TEST event_perf 00:11:26.973 ************************************ 00:11:26.973 13:31:18 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:26.973 Running I/O for 1 seconds...[2024-11-20 13:31:18.839846] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:11:26.973 [2024-11-20 13:31:18.840118] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59417 ] 00:11:27.232 [2024-11-20 13:31:19.015708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.232 [2024-11-20 13:31:19.130632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.232 [2024-11-20 13:31:19.130784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.232 [2024-11-20 13:31:19.130934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.232 Running I/O for 1 seconds...[2024-11-20 13:31:19.130953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.606 00:11:28.606 lcore 0: 182327 00:11:28.606 lcore 1: 182318 00:11:28.606 lcore 2: 182321 00:11:28.606 lcore 3: 182324 00:11:28.606 done. 
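For context, the event_perf run traced above is a throughput microbenchmark: -m 0xF starts one reactor on each of four cores, -t 1 runs for one second, and the per-lcore counters printed above (~182k each) are events processed per core. Re-running it by hand only needs the binary and a core mask; the 0x3 mask and 5-second duration below are arbitrary choices, not values from this run:

    # same binary as in the trace, smaller mask, longer run
    ./test/event/event_perf/event_perf -m 0x3 -t 5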
00:11:28.606 00:11:28.606 real 0m1.642s 00:11:28.606 user 0m4.405s 00:11:28.606 sys 0m0.108s 00:11:28.606 13:31:20 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.606 13:31:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:28.606 ************************************ 00:11:28.606 END TEST event_perf 00:11:28.606 ************************************ 00:11:28.606 13:31:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:28.606 13:31:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.606 13:31:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.606 13:31:20 event -- common/autotest_common.sh@10 -- # set +x 00:11:28.606 ************************************ 00:11:28.606 START TEST event_reactor 00:11:28.606 ************************************ 00:11:28.606 13:31:20 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:28.606 [2024-11-20 13:31:20.534796] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:11:28.606 [2024-11-20 13:31:20.535176] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59456 ] 00:11:28.865 [2024-11-20 13:31:20.708552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.865 [2024-11-20 13:31:20.813300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.291 test_start 00:11:30.291 oneshot 00:11:30.291 tick 100 00:11:30.291 tick 100 00:11:30.291 tick 250 00:11:30.291 tick 100 00:11:30.291 tick 100 00:11:30.291 tick 100 00:11:30.291 tick 250 00:11:30.291 tick 500 00:11:30.291 tick 100 00:11:30.291 tick 100 00:11:30.291 tick 250 00:11:30.291 tick 100 00:11:30.291 tick 100 00:11:30.291 test_end 00:11:30.291 ************************************ 00:11:30.291 END TEST event_reactor 00:11:30.291 ************************************ 00:11:30.291 00:11:30.291 real 0m1.555s 00:11:30.291 user 0m1.362s 00:11:30.291 sys 0m0.083s 00:11:30.291 13:31:22 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.291 13:31:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:30.291 13:31:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:30.291 13:31:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.291 13:31:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.291 13:31:22 event -- common/autotest_common.sh@10 -- # set +x 00:11:30.291 ************************************ 00:11:30.291 START TEST event_reactor_perf 00:11:30.291 ************************************ 00:11:30.291 13:31:22 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:30.291 [2024-11-20 13:31:22.131847] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:11:30.291 [2024-11-20 13:31:22.132006] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ] 00:11:30.291 [2024-11-20 13:31:22.305247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.559 [2024-11-20 13:31:22.411509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.934 test_start 00:11:31.934 test_end 00:11:31.934 Performance: 270249 events per second 00:11:31.934 00:11:31.934 real 0m1.540s 00:11:31.934 user 0m1.357s 00:11:31.934 sys 0m0.073s 00:11:31.934 13:31:23 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.934 ************************************ 00:11:31.934 END TEST event_reactor_perf 00:11:31.934 ************************************ 00:11:31.934 13:31:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:31.934 13:31:23 event -- event/event.sh@49 -- # uname -s 00:11:31.934 13:31:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:31.934 13:31:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:31.934 13:31:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.934 13:31:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.934 13:31:23 event -- common/autotest_common.sh@10 -- # set +x 00:11:31.934 ************************************ 00:11:31.934 START TEST event_scheduler 00:11:31.934 ************************************ 00:11:31.934 13:31:23 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:31.934 * Looking for test storage... 
00:11:31.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:31.934 13:31:23 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:31.934 13:31:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:11:31.934 13:31:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:31.934 13:31:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:31.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
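The scheduler harness launched below starts its test app with --wait-for-rpc, so the app sits idle until an RPC arrives on /var/tmp/spdk.sock; the "Waiting for process to start up..." message interleaved above is the harness polling for that socket. A sketch of such a poll loop, assuming rpc.py and the standard rpc_get_methods RPC (the 0.5 s interval is arbitrary, and the real waitforlisten helper may differ in detail):

    # poll until the app answers on its RPC socket
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done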
00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.934 13:31:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:31.934 13:31:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.934 13:31:23 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:31.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.934 --rc genhtml_branch_coverage=1 00:11:31.934 --rc genhtml_function_coverage=1 00:11:31.934 --rc genhtml_legend=1 00:11:31.934 --rc geninfo_all_blocks=1 00:11:31.934 --rc geninfo_unexecuted_blocks=1 00:11:31.934 00:11:31.935 ' 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:31.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.935 --rc genhtml_branch_coverage=1 00:11:31.935 --rc genhtml_function_coverage=1 00:11:31.935 --rc genhtml_legend=1 00:11:31.935 --rc geninfo_all_blocks=1 00:11:31.935 --rc geninfo_unexecuted_blocks=1 00:11:31.935 00:11:31.935 ' 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:31.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.935 --rc genhtml_branch_coverage=1 00:11:31.935 --rc genhtml_function_coverage=1 00:11:31.935 --rc genhtml_legend=1 00:11:31.935 --rc geninfo_all_blocks=1 00:11:31.935 --rc geninfo_unexecuted_blocks=1 00:11:31.935 00:11:31.935 ' 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:31.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.935 --rc genhtml_branch_coverage=1 00:11:31.935 --rc genhtml_function_coverage=1 00:11:31.935 --rc genhtml_legend=1 00:11:31.935 --rc geninfo_all_blocks=1 00:11:31.935 --rc geninfo_unexecuted_blocks=1 00:11:31.935 00:11:31.935 ' 00:11:31.935 13:31:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:31.935 13:31:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59569 00:11:31.935 13:31:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:31.935 13:31:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:31.935 13:31:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59569 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59569 ']' 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.935 13:31:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:32.194 [2024-11-20 13:31:23.978331] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:11:32.194 [2024-11-20 13:31:23.978814] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59569 ] 00:11:32.194 [2024-11-20 13:31:24.165922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.452 [2024-11-20 13:31:24.273640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.452 [2024-11-20 13:31:24.273704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.452 [2024-11-20 13:31:24.273847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.452 [2024-11-20 13:31:24.273858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.386 13:31:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.386 13:31:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:11:33.386 13:31:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:33.386 13:31:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.386 13:31:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:33.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:33.386 POWER: Cannot set governor of lcore 0 to userspace 00:11:33.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:33.386 POWER: Cannot set governor of lcore 0 to performance 00:11:33.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:33.386 POWER: Cannot set governor of lcore 0 to userspace 00:11:33.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:33.386 POWER: Cannot set governor of lcore 0 to userspace 00:11:33.386 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:11:33.386 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:33.386 POWER: Unable to set Power Management Environment for lcore 0 00:11:33.386 [2024-11-20 13:31:25.081937] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:11:33.386 [2024-11-20 13:31:25.081987] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:11:33.386 [2024-11-20 13:31:25.082013] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:33.386 [2024-11-20 13:31:25.082049] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:33.387 [2024-11-20 13:31:25.082070] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:33.387 [2024-11-20 13:31:25.082092] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:33.387 13:31:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.387 13:31:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:33.387 13:31:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.387 13:31:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:33.387 [2024-11-20 13:31:25.372960] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
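Reading the block above: the POWER/cpufreq errors mean this VM exposes no CPU scaling governors, so the dpdk governor fails to initialize and the dynamic scheduler falls back to running without it, logging its defaults (load limit 20, core limit 80, core busy 95); framework_start_init then completes start-up. The same RPC sequence can be driven by hand against any SPDK app started with --wait-for-rpc; framework_get_scheduler at the end is an extra sanity check this test does not itself perform:

    ./scripts/rpc.py framework_set_scheduler dynamic   # must happen before init, as in the trace
    ./scripts/rpc.py framework_start_init              # finish subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # report the active scheduler and its options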
00:11:33.387 13:31:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.387 13:31:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:33.387 13:31:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.387 13:31:25 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.387 13:31:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:33.387 ************************************ 00:11:33.387 START TEST scheduler_create_thread 00:11:33.387 ************************************ 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.387 2 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.387 3 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.387 4 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.387 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 5 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 6 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 7 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 8 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 9 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 10 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.646 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:34.212 ************************************ 00:11:34.212 END TEST scheduler_create_thread 00:11:34.212 ************************************ 00:11:34.212 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.212 00:11:34.212 real 0m0.600s 00:11:34.212 user 0m0.014s 00:11:34.212 sys 0m0.005s 00:11:34.212 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.212 13:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:34.212 13:31:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:34.212 13:31:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59569 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59569 ']' 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59569 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59569 00:11:34.212 killing process with pid 59569 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59569' 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59569 00:11:34.212 13:31:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59569 00:11:34.471 [2024-11-20 13:31:26.464028] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
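To summarize the scheduler_create_thread trace above: each rpc_cmd --plugin scheduler_plugin call creates a test thread with a name (-n), an optional core pin (-m), and an active percentage (-a); the returned thread id then feeds scheduler_thread_set_active and scheduler_thread_delete. Condensed to its essentials, with rpc_cmd being the harness wrapper the test bound via rpc=rpc_cmd, and the $tid capture mirroring the thread_id=11/12 assignments in the trace:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # raise it to 50% active
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"          # remove it again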
00:11:35.847 00:11:35.847 real 0m3.916s 00:11:35.847 user 0m8.250s 00:11:35.847 sys 0m0.447s 00:11:35.847 13:31:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.847 ************************************ 00:11:35.847 END TEST event_scheduler 00:11:35.847 ************************************ 00:11:35.847 13:31:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:35.847 13:31:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:35.847 13:31:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:35.847 13:31:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.847 13:31:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.847 13:31:27 event -- common/autotest_common.sh@10 -- # set +x 00:11:35.847 ************************************ 00:11:35.847 START TEST app_repeat 00:11:35.847 ************************************ 00:11:35.847 13:31:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:35.847 Process app_repeat pid: 59660 00:11:35.847 spdk_app_start Round 0 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59660 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59660' 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:35.847 13:31:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59660 /var/tmp/spdk-nbd.sock 00:11:35.847 13:31:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59660 ']' 00:11:35.847 13:31:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:35.847 13:31:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:35.847 13:31:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:35.847 13:31:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.847 13:31:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:35.847 [2024-11-20 13:31:27.700839] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:11:35.847 [2024-11-20 13:31:27.701002] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:11:35.847 [2024-11-20 13:31:27.877549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.106 [2024-11-20 13:31:27.993719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.106 [2024-11-20 13:31:27.993736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.158 13:31:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.158 13:31:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:37.158 13:31:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:37.416 Malloc0 00:11:37.416 13:31:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:37.674 Malloc1 00:11:37.933 13:31:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:37.933 13:31:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:38.192 /dev/nbd0 00:11:38.192 13:31:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:38.192 13:31:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:38.192 13:31:30 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:38.192 1+0 records in 00:11:38.192 1+0 records out 00:11:38.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396274 s, 10.3 MB/s 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:38.192 13:31:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:38.192 13:31:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.192 13:31:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:38.192 13:31:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:38.450 /dev/nbd1 00:11:38.450 13:31:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:38.450 13:31:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:38.450 13:31:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:38.450 13:31:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:38.450 13:31:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:38.450 13:31:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:38.450 13:31:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:38.450 13:31:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:38.451 1+0 records in 00:11:38.451 1+0 records out 00:11:38.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412669 s, 9.9 MB/s 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:38.451 13:31:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:38.451 13:31:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.451 13:31:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:38.451 13:31:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:38.451 13:31:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.451 
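
Both nbd attaches above go through the same readiness probe: a loop on /proc/partitions, then a single-block direct read to prove the device answers I/O. Reconstructing waitfornbd from the @872-@893 xtrace lines (the retry sleep and the temp-file path are assumptions; the rest mirrors the trace):

    # Sketch of common/autotest_common.sh::waitfornbd as the trace above runs it.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                           # @875
            grep -q -w "$nbd_name" /proc/partitions && break      # @876/@877: device registered
            sleep 0.1                                             # assumed backoff
        done
        for ((i = 1; i <= 20; i++)); do                           # @888
            # @889: prove one 4 KiB block is readable through the nbd device
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1                                             # assumed backoff
        done
        size=$(stat -c %s /tmp/nbdtest)                           # @890: 4096 in this run
        rm -f /tmp/nbdtest                                        # @891
        [[ $size != 0 ]]                                          # @892: fail on an empty read
    }
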
13:31:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:39.017 { 00:11:39.017 "nbd_device": "/dev/nbd0", 00:11:39.017 "bdev_name": "Malloc0" 00:11:39.017 }, 00:11:39.017 { 00:11:39.017 "nbd_device": "/dev/nbd1", 00:11:39.017 "bdev_name": "Malloc1" 00:11:39.017 } 00:11:39.017 ]' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:39.017 { 00:11:39.017 "nbd_device": "/dev/nbd0", 00:11:39.017 "bdev_name": "Malloc0" 00:11:39.017 }, 00:11:39.017 { 00:11:39.017 "nbd_device": "/dev/nbd1", 00:11:39.017 "bdev_name": "Malloc1" 00:11:39.017 } 00:11:39.017 ]' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:39.017 /dev/nbd1' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:39.017 /dev/nbd1' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:39.017 256+0 records in 00:11:39.017 256+0 records out 00:11:39.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683089 s, 154 MB/s 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:39.017 256+0 records in 00:11:39.017 256+0 records out 00:11:39.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318778 s, 32.9 MB/s 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:39.017 256+0 records in 00:11:39.017 256+0 records out 00:11:39.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342558 s, 30.6 MB/s 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:39.017 13:31:30 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.017 13:31:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:39.275 13:31:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.533 13:31:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:39.790 13:31:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:39.790 13:31:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:39.790 13:31:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:39.790 13:31:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.790 13:31:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.791 13:31:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:39.791 13:31:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:39.791 13:31:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.791 13:31:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:39.791 13:31:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.791 13:31:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:40.049 13:31:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:40.049 13:31:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:40.049 13:31:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:40.049 13:31:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:40.049 13:31:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:40.618 13:31:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:41.994 [2024-11-20 13:31:33.651374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.994 [2024-11-20 13:31:33.759945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.994 [2024-11-20 13:31:33.759950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.994 [2024-11-20 13:31:33.933505] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:41.994 [2024-11-20 13:31:33.933609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:43.971 spdk_app_start Round 1 00:11:43.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:43.971 13:31:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:43.971 13:31:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:43.971 13:31:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59660 /var/tmp/spdk-nbd.sock 00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59660 ']' 00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
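
Round 0 above, and Rounds 1 and 2 below, all come from one loop in test/event/event.sh; the app_repeat binary (-t 4) restarts its own SPDK app on SIGTERM, so the script only re-drives the RPC side each time. Assembling the loop from the event.sh@23-@35 trace lines — rpc here is an assumed shorthand for scripts/rpc.py -s /var/tmp/spdk-nbd.sock:

    # Sketch of the round loop in test/event/event.sh, per the xtrace.
    for i in {0..2}; do                                   # @23
        echo "spdk_app_start Round $i"                    # @24
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock  # @25: socket is back up
        rpc bdev_malloc_create 64 4096                    # @27: Malloc0, 64 MB, 4 KiB blocks
        rpc bdev_malloc_create 64 4096                    # @28: Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock \
            'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'       # @30: write + readback pass
        rpc spdk_kill_instance SIGTERM                    # @34: ask the app to restart
        sleep 3                                           # @35: let it come back up
    done
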
00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.971 13:31:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:43.971 13:31:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:44.230 Malloc0 00:11:44.488 13:31:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:44.746 Malloc1 00:11:44.746 13:31:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:44.746 13:31:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:45.004 /dev/nbd0 00:11:45.004 13:31:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:45.004 13:31:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:45.004 1+0 records in 00:11:45.004 1+0 records out 
00:11:45.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670497 s, 6.1 MB/s 00:11:45.004 13:31:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.004 13:31:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:45.004 13:31:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.004 13:31:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:45.004 13:31:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:45.004 13:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.004 13:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:45.004 13:31:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:45.572 /dev/nbd1 00:11:45.572 13:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:45.572 13:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:45.572 1+0 records in 00:11:45.572 1+0 records out 00:11:45.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412861 s, 9.9 MB/s 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:45.572 13:31:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:45.572 13:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.572 13:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:45.572 13:31:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:45.572 13:31:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:45.572 13:31:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:45.831 { 00:11:45.831 "nbd_device": "/dev/nbd0", 00:11:45.831 "bdev_name": "Malloc0" 00:11:45.831 }, 00:11:45.831 { 00:11:45.831 "nbd_device": "/dev/nbd1", 00:11:45.831 "bdev_name": "Malloc1" 00:11:45.831 } 
00:11:45.831 ]' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:45.831 { 00:11:45.831 "nbd_device": "/dev/nbd0", 00:11:45.831 "bdev_name": "Malloc0" 00:11:45.831 }, 00:11:45.831 { 00:11:45.831 "nbd_device": "/dev/nbd1", 00:11:45.831 "bdev_name": "Malloc1" 00:11:45.831 } 00:11:45.831 ]' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:45.831 /dev/nbd1' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:45.831 /dev/nbd1' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:45.831 256+0 records in 00:11:45.831 256+0 records out 00:11:45.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00804858 s, 130 MB/s 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:45.831 256+0 records in 00:11:45.831 256+0 records out 00:11:45.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362501 s, 28.9 MB/s 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:45.831 256+0 records in 00:11:45.831 256+0 records out 00:11:45.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0410113 s, 25.6 MB/s 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:45.831 13:31:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.831 13:31:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.398 13:31:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.656 13:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:46.914 13:31:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:46.914 13:31:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:47.480 13:31:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:48.451 [2024-11-20 13:31:40.479801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:48.709 [2024-11-20 13:31:40.610282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.709 [2024-11-20 13:31:40.610307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.968 [2024-11-20 13:31:40.789480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:48.968 [2024-11-20 13:31:40.789589] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:50.340 13:31:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:50.340 13:31:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:50.340 spdk_app_start Round 2 00:11:50.340 13:31:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59660 /var/tmp/spdk-nbd.sock 00:11:50.340 13:31:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59660 ']' 00:11:50.340 13:31:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:50.340 13:31:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:50.341 13:31:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
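
The dd/cmp passes that repeat in every round are bdev/nbd_common.sh's nbd_dd_data_verify: seed a 1 MiB random file, stream it onto each nbd device with O_DIRECT, then on the verify pass compare the devices byte-for-byte against the same file. A sketch from the traced @70-@85 line numbers, with $TEST_DIR standing in (assumption) for the spdk_repo test/event path:

    # Sketch of bdev/nbd_common.sh::nbd_dd_data_verify per the xtrace above.
    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2                         # @70/@71
        local tmp_file=$TEST_DIR/nbdrandtest                     # @72 (path assumed)
        if [[ $operation == write ]]; then                       # @74
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256    # @76: 1 MiB of noise
            for i in "${nbd_list[@]}"; do                        # @77
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct   # @78
            done
        elif [[ $operation == verify ]]; then                    # @80
            for i in "${nbd_list[@]}"; do                        # @82
                cmp -b -n 1M $tmp_file $i                        # @83: byte-for-byte readback
            done
            rm $tmp_file                                         # @85
        fi
    }
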
00:11:50.341 13:31:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.341 13:31:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:50.906 13:31:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.906 13:31:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:50.906 13:31:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:51.164 Malloc0 00:11:51.164 13:31:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:51.422 Malloc1 00:11:51.422 13:31:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.422 13:31:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:51.680 /dev/nbd0 00:11:51.680 13:31:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:51.680 13:31:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:51.680 1+0 records in 00:11:51.680 1+0 records out 
00:11:51.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291434 s, 14.1 MB/s 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.680 13:31:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:51.680 13:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.680 13:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.680 13:31:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:52.245 /dev/nbd1 00:11:52.245 13:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:52.245 13:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:52.245 1+0 records in 00:11:52.245 1+0 records out 00:11:52.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003057 s, 13.4 MB/s 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.245 13:31:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:52.245 13:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.245 13:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:52.245 13:31:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:52.245 13:31:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.245 13:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:52.502 13:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:52.502 { 00:11:52.502 "nbd_device": "/dev/nbd0", 00:11:52.502 "bdev_name": "Malloc0" 00:11:52.502 }, 00:11:52.502 { 00:11:52.502 "nbd_device": "/dev/nbd1", 00:11:52.502 "bdev_name": "Malloc1" 00:11:52.503 } 
00:11:52.503 ]' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:52.503 { 00:11:52.503 "nbd_device": "/dev/nbd0", 00:11:52.503 "bdev_name": "Malloc0" 00:11:52.503 }, 00:11:52.503 { 00:11:52.503 "nbd_device": "/dev/nbd1", 00:11:52.503 "bdev_name": "Malloc1" 00:11:52.503 } 00:11:52.503 ]' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:52.503 /dev/nbd1' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:52.503 /dev/nbd1' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:52.503 256+0 records in 00:11:52.503 256+0 records out 00:11:52.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00687595 s, 152 MB/s 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:52.503 256+0 records in 00:11:52.503 256+0 records out 00:11:52.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314274 s, 33.4 MB/s 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:52.503 256+0 records in 00:11:52.503 256+0 records out 00:11:52.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302129 s, 34.7 MB/s 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:52.503 13:31:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:52.503 13:31:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.760 13:31:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.018 13:31:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.276 13:31:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:53.534 13:31:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:53.534 13:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:53.534 13:31:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:53.534 13:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:53.534 13:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:53.535 13:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:53.535 13:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:53.535 13:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:53.535 13:31:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:53.535 13:31:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:53.535 13:31:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:53.535 13:31:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:53.535 13:31:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:54.100 13:31:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:55.040 [2024-11-20 13:31:46.933616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:55.040 [2024-11-20 13:31:47.047075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.040 [2024-11-20 13:31:47.047090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.298 [2024-11-20 13:31:47.218865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:55.298 [2024-11-20 13:31:47.218998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:57.198 13:31:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59660 /var/tmp/spdk-nbd.sock 00:11:57.198 13:31:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59660 ']' 00:11:57.198 13:31:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:57.198 13:31:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:57.198 13:31:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
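
Teardown in each round polls the opposite way: nbd_stop_disk goes out over RPC, then waitfornbd_exit spins until the name drops out of /proc/partitions. Judging from the @35-@45 trace lines (the per-iteration sleep is an assumption), roughly:

    # Sketch of common/autotest_common.sh::waitfornbd_exit per the teardown trace.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                        # @37
            if grep -q -w "$nbd_name" /proc/partitions; then   # @38: still registered
                sleep 0.1                                      # assumed
            else
                break                                          # @41: kernel dropped the device
            fi
        done
        return 0                                               # @45
    }

Once both devices are gone, nbd_get_disks returns '[]', the grep -c count falls to 0, and the '[' 0 -ne 0 ']' guard falls through — which is why every round ends with count=0 before spdk_kill_instance.
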
00:11:57.198 13:31:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.198 13:31:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:57.198 13:31:49 event.app_repeat -- event/event.sh@39 -- # killprocess 59660 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59660 ']' 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59660 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59660 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.198 killing process with pid 59660 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59660' 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59660 00:11:57.198 13:31:49 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59660 00:11:58.131 spdk_app_start is called in Round 0. 00:11:58.131 Shutdown signal received, stop current app iteration 00:11:58.131 Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 reinitialization... 00:11:58.131 spdk_app_start is called in Round 1. 00:11:58.131 Shutdown signal received, stop current app iteration 00:11:58.131 Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 reinitialization... 00:11:58.131 spdk_app_start is called in Round 2. 00:11:58.131 Shutdown signal received, stop current app iteration 00:11:58.131 Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 reinitialization... 00:11:58.131 spdk_app_start is called in Round 3. 00:11:58.131 Shutdown signal received, stop current app iteration 00:11:58.131 13:31:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:58.131 13:31:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:58.131 00:11:58.131 real 0m22.488s 00:11:58.131 user 0m50.694s 00:11:58.131 sys 0m2.866s 00:11:58.131 13:31:50 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.131 ************************************ 00:11:58.131 END TEST app_repeat 00:11:58.131 13:31:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:58.131 ************************************ 00:11:58.131 13:31:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:58.131 13:31:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:58.131 13:31:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.131 13:31:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.131 13:31:50 event -- common/autotest_common.sh@10 -- # set +x 00:11:58.390 ************************************ 00:11:58.390 START TEST cpu_locks 00:11:58.390 ************************************ 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:58.390 * Looking for test storage... 
00:11:58.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.390 13:31:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:58.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.390 --rc genhtml_branch_coverage=1 00:11:58.390 --rc genhtml_function_coverage=1 00:11:58.390 --rc genhtml_legend=1 00:11:58.390 --rc geninfo_all_blocks=1 00:11:58.390 --rc geninfo_unexecuted_blocks=1 00:11:58.390 00:11:58.390 ' 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:58.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.390 --rc genhtml_branch_coverage=1 00:11:58.390 --rc genhtml_function_coverage=1 
00:11:58.390 --rc genhtml_legend=1 00:11:58.390 --rc geninfo_all_blocks=1 00:11:58.390 --rc geninfo_unexecuted_blocks=1 00:11:58.390 00:11:58.390 ' 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:58.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.390 --rc genhtml_branch_coverage=1 00:11:58.390 --rc genhtml_function_coverage=1 00:11:58.390 --rc genhtml_legend=1 00:11:58.390 --rc geninfo_all_blocks=1 00:11:58.390 --rc geninfo_unexecuted_blocks=1 00:11:58.390 00:11:58.390 ' 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:58.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.390 --rc genhtml_branch_coverage=1 00:11:58.390 --rc genhtml_function_coverage=1 00:11:58.390 --rc genhtml_legend=1 00:11:58.390 --rc geninfo_all_blocks=1 00:11:58.390 --rc geninfo_unexecuted_blocks=1 00:11:58.390 00:11:58.390 ' 00:11:58.390 13:31:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:58.390 13:31:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:58.390 13:31:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:58.390 13:31:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.390 13:31:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:58.390 ************************************ 00:11:58.390 START TEST default_locks 00:11:58.390 ************************************ 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60145 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60145 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60145 ']' 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.390 13:31:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:58.649 [2024-11-20 13:31:50.493642] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:11:58.649 [2024-11-20 13:31:50.493793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60145 ] 00:11:58.909 [2024-11-20 13:31:50.692675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.909 [2024-11-20 13:31:50.794957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.905 13:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.905 13:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:59.905 13:31:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60145 00:11:59.905 13:31:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60145 00:11:59.905 13:31:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60145 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60145 ']' 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60145 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60145 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.183 killing process with pid 60145 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60145' 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60145 00:12:00.183 13:31:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60145 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60145 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60145 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60145 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60145 ']' 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:02.707 ERROR: process (pid: 60145) is no longer running 00:12:02.707 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60145) - No such process 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:02.707 00:12:02.707 real 0m3.979s 00:12:02.707 user 0m4.135s 00:12:02.707 sys 0m0.608s 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.707 ************************************ 00:12:02.707 END TEST default_locks 00:12:02.707 ************************************ 00:12:02.707 13:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:02.707 13:31:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:02.707 13:31:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:02.707 13:31:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.707 13:31:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:02.707 ************************************ 00:12:02.707 START TEST default_locks_via_rpc 00:12:02.707 ************************************ 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60219 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60219 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60219 ']' 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.708 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.708 13:31:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.708 [2024-11-20 13:31:54.525646] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:02.708 [2024-11-20 13:31:54.525794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60219 ] 00:12:02.966 [2024-11-20 13:31:54.745175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.966 [2024-11-20 13:31:54.866394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60219 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:03.900 13:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60219 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60219 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60219 ']' 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60219 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.158 13:31:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60219 00:12:04.158 killing process with pid 60219 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60219' 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60219 00:12:04.158 13:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60219 00:12:06.688 ************************************ 00:12:06.688 END TEST default_locks_via_rpc 00:12:06.688 ************************************ 00:12:06.688 00:12:06.688 real 0m3.859s 00:12:06.688 user 0m3.988s 00:12:06.688 sys 0m0.614s 00:12:06.688 13:31:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.688 13:31:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.688 13:31:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:06.688 13:31:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.688 13:31:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.688 13:31:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:06.688 ************************************ 00:12:06.688 START TEST non_locking_app_on_locked_coremask 00:12:06.688 ************************************ 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60288 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60288 /var/tmp/spdk.sock 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60288 ']' 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.688 13:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:06.688 [2024-11-20 13:31:58.436168] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:12:06.688 [2024-11-20 13:31:58.436308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60288 ] 00:12:06.688 [2024-11-20 13:31:58.609310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.946 [2024-11-20 13:31:58.734711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60310 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60310 /var/tmp/spdk2.sock 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60310 ']' 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.512 13:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:07.770 [2024-11-20 13:31:59.667373] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:07.770 [2024-11-20 13:31:59.667509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:12:08.028 [2024-11-20 13:31:59.861725] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
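Note: the "CPU core locks deactivated" notice above comes from the second spdk_tgt instance (pid 60310), which is allowed onto core 0 only because it was started with --disable-cpumask-locks while pid 60288 already holds the core-0 lock file. A minimal bash sketch of the same scenario, assuming an SPDK build tree and the default /var/tmp sockets (paths are illustrative, not taken from this run):

  ./build/bin/spdk_tgt -m 0x1 &                       # first target claims /var/tmp/spdk_cpu_lock_000
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock \
      --disable-cpumask-locks &                       # second target shares core 0 without claiming a lock
  lslocks | grep spdk_cpu_lock                        # only the first pid shows up as the lock holder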
00:12:08.028 [2024-11-20 13:31:59.861788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.286 [2024-11-20 13:32:00.076270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.655 13:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.655 13:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:09.655 13:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60288 00:12:09.655 13:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60288 00:12:09.655 13:32:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60288 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60288 ']' 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60288 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60288 00:12:10.589 killing process with pid 60288 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60288' 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60288 00:12:10.589 13:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60288 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60310 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60310 ']' 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60310 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60310 00:12:14.772 killing process with pid 60310 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60310' 00:12:14.772 13:32:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60310 00:12:14.772 13:32:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60310 00:12:17.311 ************************************ 00:12:17.311 END TEST non_locking_app_on_locked_coremask 00:12:17.311 ************************************ 00:12:17.311 00:12:17.311 real 0m10.529s 00:12:17.311 user 0m11.203s 00:12:17.311 sys 0m1.276s 00:12:17.311 13:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.311 13:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 13:32:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:17.311 13:32:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:17.311 13:32:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.311 13:32:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 ************************************ 00:12:17.311 START TEST locking_app_on_unlocked_coremask 00:12:17.311 ************************************ 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60448 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60448 /var/tmp/spdk.sock 00:12:17.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60448 ']' 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.311 13:32:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 [2024-11-20 13:32:08.988053] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:17.311 [2024-11-20 13:32:08.988479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:12:17.311 [2024-11-20 13:32:09.208487] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
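Note: locking_app_on_unlocked_coremask inverts the previous case. Here the first target (pid 60448) is the one started with --disable-cpumask-locks, so a second, normally locking instance can claim core 0 itself, as the trace below shows. A sketch under the same assumptions as above:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # starts without claiming any core lock
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims the core-0 lock unopposed
  lslocks | grep spdk_cpu_lock                            # now the second pid is the lock holder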
00:12:17.311 [2024-11-20 13:32:09.208821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.311 [2024-11-20 13:32:09.318809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60464 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60464 /var/tmp/spdk2.sock 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60464 ']' 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.254 13:32:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:18.254 [2024-11-20 13:32:10.188300] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:12:18.254 [2024-11-20 13:32:10.188468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:12:18.511 [2024-11-20 13:32:10.385908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.770 [2024-11-20 13:32:10.597392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.195 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.195 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:20.195 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60464 00:12:20.195 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:20.195 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60464 00:12:21.129 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60448 00:12:21.129 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60448 ']' 00:12:21.129 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60448 00:12:21.129 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:21.129 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.129 13:32:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60448 00:12:21.129 killing process with pid 60448 00:12:21.129 13:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.129 13:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.129 13:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60448' 00:12:21.129 13:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60448 00:12:21.129 13:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60448 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60464 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60464 ']' 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60464 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60464 00:12:26.391 killing process with pid 60464 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.391 13:32:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60464' 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60464 00:12:26.391 13:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60464 00:12:27.764 ************************************ 00:12:27.764 END TEST locking_app_on_unlocked_coremask 00:12:27.764 ************************************ 00:12:27.764 00:12:27.764 real 0m10.788s 00:12:27.764 user 0m11.398s 00:12:27.764 sys 0m1.217s 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:27.764 13:32:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:27.764 13:32:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:27.764 13:32:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.764 13:32:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:27.764 ************************************ 00:12:27.764 START TEST locking_app_on_locked_coremask 00:12:27.764 ************************************ 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60599 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60599 /var/tmp/spdk.sock 00:12:27.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60599 ']' 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.764 13:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:28.021 [2024-11-20 13:32:19.833213] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:12:28.021 [2024-11-20 13:32:19.833377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60599 ] 00:12:28.021 [2024-11-20 13:32:20.016578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.279 [2024-11-20 13:32:20.178246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60626 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60626 /var/tmp/spdk2.sock 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60626 /var/tmp/spdk2.sock 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60626 /var/tmp/spdk2.sock 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60626 ']' 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:29.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.212 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:29.212 [2024-11-20 13:32:21.225535] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:12:29.213 [2024-11-20 13:32:21.225780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60626 ] 00:12:29.471 [2024-11-20 13:32:21.438156] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60599 has claimed it. 00:12:29.471 [2024-11-20 13:32:21.438258] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:30.037 ERROR: process (pid: 60626) is no longer running 00:12:30.037 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60626) - No such process 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60599 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60599 00:12:30.037 13:32:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60599 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60599 ']' 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60599 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60599 00:12:30.604 killing process with pid 60599 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60599' 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60599 00:12:30.604 13:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60599 00:12:33.136 ************************************ 00:12:33.136 END TEST locking_app_on_locked_coremask 00:12:33.136 ************************************ 00:12:33.136 00:12:33.136 real 0m4.875s 00:12:33.136 user 0m5.431s 00:12:33.136 sys 0m0.816s 00:12:33.136 13:32:24 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.136 13:32:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:33.136 13:32:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:33.136 13:32:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:33.136 13:32:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.136 13:32:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:33.136 ************************************ 00:12:33.136 START TEST locking_overlapped_coremask 00:12:33.136 ************************************ 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60690 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60690 /var/tmp/spdk.sock 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60690 ']' 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.136 13:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:33.136 [2024-11-20 13:32:24.742729] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:12:33.136 [2024-11-20 13:32:24.742916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60690 ] 00:12:33.136 [2024-11-20 13:32:24.939409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.136 [2024-11-20 13:32:25.046333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.136 [2024-11-20 13:32:25.046456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.136 [2024-11-20 13:32:25.046475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60708 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60708 /var/tmp/spdk2.sock 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60708 /var/tmp/spdk2.sock 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60708 /var/tmp/spdk2.sock 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60708 ']' 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.070 13:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:34.070 [2024-11-20 13:32:25.935952] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:12:34.070 [2024-11-20 13:32:25.936095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60708 ] 00:12:34.328 [2024-11-20 13:32:26.147393] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60690 has claimed it. 00:12:34.328 [2024-11-20 13:32:26.147505] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:34.894 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60708) - No such process 00:12:34.894 ERROR: process (pid: 60708) is no longer running 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60690 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60690 ']' 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60690 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60690 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.894 killing process with pid 60690 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60690' 00:12:34.894 13:32:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60690 00:12:34.894 13:32:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60690 00:12:36.792 00:12:36.792 real 0m4.204s 00:12:36.792 user 0m11.567s 00:12:36.792 sys 0m0.587s 00:12:36.792 13:32:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.792 13:32:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:36.792 ************************************ 00:12:36.792 END TEST locking_overlapped_coremask 00:12:36.792 ************************************ 00:12:37.050 13:32:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:37.050 13:32:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.050 13:32:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.050 13:32:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:37.050 ************************************ 00:12:37.050 START TEST locking_overlapped_coremask_via_rpc 00:12:37.050 ************************************ 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60772 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60772 /var/tmp/spdk.sock 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60772 ']' 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.050 13:32:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.050 [2024-11-20 13:32:28.970723] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:37.050 [2024-11-20 13:32:28.970895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60772 ] 00:12:37.308 [2024-11-20 13:32:29.145688] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
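Note: in the _via_rpc variant both targets start with --disable-cpumask-locks on overlapping masks (0x7 and 0x1c share core 2) and the locks are claimed afterwards over JSON-RPC with framework_enable_cpumask_locks, as the trace below shows. A sketch using SPDK's rpc.py client (assumed to be run from the repo root; this run itself goes through the test harness's rpc_cmd wrapper instead):

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  scripts/rpc.py framework_enable_cpumask_locks           # first target claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock \
      framework_enable_cpumask_locks                      # fails: core 2 is already claimed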
00:12:37.308 [2024-11-20 13:32:29.145750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.308 [2024-11-20 13:32:29.251096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.308 [2024-11-20 13:32:29.251196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.309 [2024-11-20 13:32:29.251200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60795 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60795 /var/tmp/spdk2.sock 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60795 ']' 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.242 13:32:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.242 [2024-11-20 13:32:30.153461] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:38.242 [2024-11-20 13:32:30.153666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60795 ] 00:12:38.501 [2024-11-20 13:32:30.356742] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:38.501 [2024-11-20 13:32:30.356804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.759 [2024-11-20 13:32:30.573737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.759 [2024-11-20 13:32:30.576956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.759 [2024-11-20 13:32:30.576969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.289 [2024-11-20 13:32:32.977195] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60772 has claimed it. 
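The claim failure above ("Cannot create lock on core 2") comes from per-core lock files: each claimed core corresponds to a /var/tmp/spdk_cpu_lock_NNN path, which is exactly what check_remaining_locks globs for later in this test. A rough approximation of the claim-and-conflict behavior using flock(1), assuming the same zero-padded naming:

# Take an exclusive, non-blocking lock on core 2's lock file, roughly
# what the first target (pid 60772) holds once locks are enabled.
lockfile=/var/tmp/spdk_cpu_lock_002
exec 9>"$lockfile"
if flock -n 9; then
  echo "claimed core 2 via $lockfile"
else
  echo "core 2 already claimed by another process" >&2
fi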
00:12:41.289 request: 00:12:41.289 { 00:12:41.289 "method": "framework_enable_cpumask_locks", 00:12:41.289 "req_id": 1 00:12:41.289 } 00:12:41.289 Got JSON-RPC error response 00:12:41.289 response: 00:12:41.289 { 00:12:41.289 "code": -32603, 00:12:41.289 "message": "Failed to claim CPU core: 2" 00:12:41.289 } 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60772 /var/tmp/spdk.sock 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60772 ']' 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.289 13:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60795 /var/tmp/spdk2.sock 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60795 ']' 00:12:41.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
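The rpc_cmd and NOT wrappers in the trace above translate to two plain rpc.py invocations, and the -32603 in the response is the generic JSON-RPC internal-error code that spdk_tgt maps the core-claim failure onto. Run by hand against the two sockets, the exchange would look like this, assuming both targets are still up:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# First target (default /var/tmp/spdk.sock): succeeds and takes the locks.
$rpc framework_enable_cpumask_locks

# Second target, overlapping on core 2: expected to fail with
# "Failed to claim CPU core: 2" (code -32603), as captured above.
$rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks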
00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.547 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.805 ************************************ 00:12:41.805 END TEST locking_overlapped_coremask_via_rpc 00:12:41.805 ************************************ 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:41.805 00:12:41.805 real 0m4.893s 00:12:41.805 user 0m2.037s 00:12:41.805 sys 0m0.219s 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.805 13:32:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.805 13:32:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:41.805 13:32:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60772 ]] 00:12:41.805 13:32:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60772 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60772 ']' 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60772 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60772 00:12:41.805 killing process with pid 60772 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60772' 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60772 00:12:41.805 13:32:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60772 00:12:44.335 13:32:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60795 ]] 00:12:44.335 13:32:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60795 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60795 ']' 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60795 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.335 
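check_remaining_locks, invoked above, asserts that after the failed second claim only the first target's three cores still hold lock files; the long backslash-escaped run in the trace is just xtrace's rendering of the literal right-hand side of the [[ ... == ... ]] match, not corruption. Condensed, the assertion amounts to:

# Expect exactly the lock files for cores 0-2 (the first target's 0x7 mask).
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
  echo "only cores 0-2 are locked, as expected"
else
  echo "unexpected lock files: ${locks[*]}" >&2
fi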
13:32:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60795 00:12:44.335 killing process with pid 60795 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60795' 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60795 00:12:44.335 13:32:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60795 00:12:46.237 13:32:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:46.237 13:32:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:46.237 13:32:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60772 ]] 00:12:46.237 13:32:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60772 00:12:46.237 Process with pid 60772 is not found 00:12:46.237 Process with pid 60795 is not found 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60772 ']' 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60772 00:12:46.238 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60772) - No such process 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60772 is not found' 00:12:46.238 13:32:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60795 ]] 00:12:46.238 13:32:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60795 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60795 ']' 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60795 00:12:46.238 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60795) - No such process 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60795 is not found' 00:12:46.238 13:32:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:46.238 00:12:46.238 real 0m47.942s 00:12:46.238 user 1m26.035s 00:12:46.238 sys 0m6.271s 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.238 13:32:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:46.238 ************************************ 00:12:46.238 END TEST cpu_locks 00:12:46.238 ************************************ 00:12:46.238 00:12:46.238 real 1m19.534s 00:12:46.238 user 2m32.299s 00:12:46.238 sys 0m10.088s 00:12:46.238 13:32:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.238 13:32:38 event -- common/autotest_common.sh@10 -- # set +x 00:12:46.238 ************************************ 00:12:46.238 END TEST event 00:12:46.238 ************************************ 00:12:46.238 13:32:38 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:46.238 13:32:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.238 13:32:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.238 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:46.238 ************************************ 00:12:46.238 START TEST thread 00:12:46.238 ************************************ 00:12:46.238 13:32:38 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:46.238 * Looking for test storage... 
00:12:46.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:46.496 13:32:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.496 13:32:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.496 13:32:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.496 13:32:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.496 13:32:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.496 13:32:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.496 13:32:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.496 13:32:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.496 13:32:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.496 13:32:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.496 13:32:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.496 13:32:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:46.496 13:32:38 thread -- scripts/common.sh@345 -- # : 1 00:12:46.496 13:32:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.496 13:32:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:46.496 13:32:38 thread -- scripts/common.sh@365 -- # decimal 1 00:12:46.496 13:32:38 thread -- scripts/common.sh@353 -- # local d=1 00:12:46.496 13:32:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.496 13:32:38 thread -- scripts/common.sh@355 -- # echo 1 00:12:46.496 13:32:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.496 13:32:38 thread -- scripts/common.sh@366 -- # decimal 2 00:12:46.496 13:32:38 thread -- scripts/common.sh@353 -- # local d=2 00:12:46.496 13:32:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.496 13:32:38 thread -- scripts/common.sh@355 -- # echo 2 00:12:46.496 13:32:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.496 13:32:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.496 13:32:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.496 13:32:38 thread -- scripts/common.sh@368 -- # return 0 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:46.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.496 --rc genhtml_branch_coverage=1 00:12:46.496 --rc genhtml_function_coverage=1 00:12:46.496 --rc genhtml_legend=1 00:12:46.496 --rc geninfo_all_blocks=1 00:12:46.496 --rc geninfo_unexecuted_blocks=1 00:12:46.496 00:12:46.496 ' 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:46.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.496 --rc genhtml_branch_coverage=1 00:12:46.496 --rc genhtml_function_coverage=1 00:12:46.496 --rc genhtml_legend=1 00:12:46.496 --rc geninfo_all_blocks=1 00:12:46.496 --rc geninfo_unexecuted_blocks=1 00:12:46.496 00:12:46.496 ' 00:12:46.496 13:32:38 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:46.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:12:46.496 --rc genhtml_branch_coverage=1 00:12:46.496 --rc genhtml_function_coverage=1 00:12:46.496 --rc genhtml_legend=1 00:12:46.496 --rc geninfo_all_blocks=1 00:12:46.497 --rc geninfo_unexecuted_blocks=1 00:12:46.497 00:12:46.497 ' 00:12:46.497 13:32:38 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:46.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.497 --rc genhtml_branch_coverage=1 00:12:46.497 --rc genhtml_function_coverage=1 00:12:46.497 --rc genhtml_legend=1 00:12:46.497 --rc geninfo_all_blocks=1 00:12:46.497 --rc geninfo_unexecuted_blocks=1 00:12:46.497 00:12:46.497 ' 00:12:46.497 13:32:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:46.497 13:32:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:46.497 13:32:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.497 13:32:38 thread -- common/autotest_common.sh@10 -- # set +x 00:12:46.497 ************************************ 00:12:46.497 START TEST thread_poller_perf 00:12:46.497 ************************************ 00:12:46.497 13:32:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:46.497 [2024-11-20 13:32:38.468360] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:46.497 [2024-11-20 13:32:38.468772] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:12:46.755 [2024-11-20 13:32:38.680444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.014 [2024-11-20 13:32:38.818046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.014 Running 1000 pollers for 1 seconds with 1 microseconds period. 
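poller_perf registers 1000 pollers (-b 1000) with a 1 microsecond period (-l 1) and runs them for one second (-t 1), then derives a per-poll cost from busy TSC cycles. The arithmetic behind the summary table that follows, reconstructed from this run's own counters:

# Reproduce the derived poller_cost line from the raw counters below.
busy=2213139059         # busy TSC cycles reported for the run
total_run_count=293000  # number of poller executions
tsc_hz=2200000000       # TSC frequency

cost_cyc=$(( busy / total_run_count ))           # 7553 cycles per poll
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 3433 nsec per poll
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"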
00:12:48.393 [2024-11-20T13:32:40.432Z] ====================================== 00:12:48.393 [2024-11-20T13:32:40.432Z] busy:2213139059 (cyc) 00:12:48.393 [2024-11-20T13:32:40.432Z] total_run_count: 293000 00:12:48.393 [2024-11-20T13:32:40.432Z] tsc_hz: 2200000000 (cyc) 00:12:48.393 [2024-11-20T13:32:40.432Z] ====================================== 00:12:48.393 [2024-11-20T13:32:40.432Z] poller_cost: 7553 (cyc), 3433 (nsec) 00:12:48.393 00:12:48.393 real 0m1.639s 00:12:48.393 user 0m1.421s 00:12:48.393 sys 0m0.107s 00:12:48.393 13:32:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.393 ************************************ 00:12:48.393 END TEST thread_poller_perf 00:12:48.393 ************************************ 00:12:48.393 13:32:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:48.393 13:32:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:48.393 13:32:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:48.393 13:32:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.393 13:32:40 thread -- common/autotest_common.sh@10 -- # set +x 00:12:48.393 ************************************ 00:12:48.393 START TEST thread_poller_perf 00:12:48.393 ************************************ 00:12:48.393 13:32:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:48.393 [2024-11-20 13:32:40.152936] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:48.393 [2024-11-20 13:32:40.153117] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61027 ] 00:12:48.393 [2024-11-20 13:32:40.340099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.652 Running 1000 pollers for 1 seconds with 0 microseconds period. 
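The second run repeats the benchmark with -l 0, i.e. pollers with no timer period that fire on every reactor iteration, so total_run_count climbs by an order of magnitude and the per-poll cost drops accordingly (there is no per-poller timer bookkeeping to amortize). The same derivation, with the counters from the table that follows:

busy=2204879762
total_run_count=3372000
tsc_hz=2200000000
cost_cyc=$(( busy / total_run_count ))           # 653 cycles per poll
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 296 nsec per poll
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"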
00:12:48.652 [2024-11-20 13:32:40.477342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.027 [2024-11-20T13:32:42.066Z] ====================================== 00:12:50.027 [2024-11-20T13:32:42.066Z] busy:2204879762 (cyc) 00:12:50.027 [2024-11-20T13:32:42.066Z] total_run_count: 3372000 00:12:50.027 [2024-11-20T13:32:42.066Z] tsc_hz: 2200000000 (cyc) 00:12:50.027 [2024-11-20T13:32:42.066Z] ====================================== 00:12:50.027 [2024-11-20T13:32:42.066Z] poller_cost: 653 (cyc), 296 (nsec) 00:12:50.027 ************************************ 00:12:50.027 END TEST thread_poller_perf 00:12:50.027 ************************************ 00:12:50.027 00:12:50.027 real 0m1.642s 00:12:50.027 user 0m1.429s 00:12:50.027 sys 0m0.101s 00:12:50.027 13:32:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.027 13:32:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:50.027 13:32:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:50.027 ************************************ 00:12:50.027 END TEST thread 00:12:50.027 ************************************ 00:12:50.027 00:12:50.027 real 0m3.583s 00:12:50.027 user 0m3.032s 00:12:50.027 sys 0m0.325s 00:12:50.027 13:32:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.027 13:32:41 thread -- common/autotest_common.sh@10 -- # set +x 00:12:50.027 13:32:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:50.027 13:32:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:50.027 13:32:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.027 13:32:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.027 13:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.027 ************************************ 00:12:50.028 START TEST app_cmdline 00:12:50.028 ************************************ 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:50.028 * Looking for test storage... 
00:12:50.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.028 13:32:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 00:12:50.028 ' 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 
00:12:50.028 ' 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 00:12:50.028 ' 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.028 --rc genhtml_branch_coverage=1 00:12:50.028 --rc genhtml_function_coverage=1 00:12:50.028 --rc genhtml_legend=1 00:12:50.028 --rc geninfo_all_blocks=1 00:12:50.028 --rc geninfo_unexecuted_blocks=1 00:12:50.028 00:12:50.028 ' 00:12:50.028 13:32:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:50.028 13:32:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61116 00:12:50.028 13:32:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:50.028 13:32:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61116 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61116 ']' 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.028 13:32:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:50.286 [2024-11-20 13:32:42.124317] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
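cmdline.sh starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are serviceable and anything else is rejected before dispatch with the standard JSON-RPC -32601 "Method not found". The checks that follow are equivalent to running, by hand against the default socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc spdk_get_version        # allowed: returns the version object shown below
$rpc rpc_get_methods         # allowed: lists exactly the two permitted methods
$rpc env_dpdk_get_mem_stats  # blocked by the allow-list: fails with -32601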
00:12:50.286 [2024-11-20 13:32:42.124569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61116 ] 00:12:50.286 [2024-11-20 13:32:42.315690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.556 [2024-11-20 13:32:42.418572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:51.499 { 00:12:51.499 "version": "SPDK v25.01-pre git sha1 b6a8866f3", 00:12:51.499 "fields": { 00:12:51.499 "major": 25, 00:12:51.499 "minor": 1, 00:12:51.499 "patch": 0, 00:12:51.499 "suffix": "-pre", 00:12:51.499 "commit": "b6a8866f3" 00:12:51.499 } 00:12:51.499 } 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:51.499 13:32:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.499 13:32:43 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.758 13:32:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.758 13:32:43 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.758 13:32:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.758 13:32:43 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.758 13:32:43 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:51.758 13:32:43 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:51.758 request: 00:12:51.758 { 00:12:51.758 "method": "env_dpdk_get_mem_stats", 00:12:51.758 "req_id": 1 00:12:51.758 } 00:12:51.758 Got JSON-RPC error response 00:12:51.758 response: 00:12:51.758 { 00:12:51.758 "code": -32601, 00:12:51.758 "message": "Method not found" 00:12:51.758 } 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:52.017 13:32:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61116 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61116 ']' 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61116 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61116 00:12:52.017 killing process with pid 61116 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61116' 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@973 -- # kill 61116 00:12:52.017 13:32:43 app_cmdline -- common/autotest_common.sh@978 -- # wait 61116 00:12:53.922 ************************************ 00:12:53.922 END TEST app_cmdline 00:12:53.922 ************************************ 00:12:53.922 00:12:53.922 real 0m4.073s 00:12:53.922 user 0m4.673s 00:12:53.922 sys 0m0.513s 00:12:53.922 13:32:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.922 13:32:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:53.922 13:32:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:53.922 13:32:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.922 13:32:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.922 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:12:53.922 ************************************ 00:12:53.922 START TEST version 00:12:53.922 ************************************ 00:12:53.922 13:32:45 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:54.179 * Looking for test storage... 
00:12:54.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:54.179 13:32:46 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:54.179 13:32:46 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:54.179 13:32:46 version -- common/autotest_common.sh@1693 -- # lcov --version 00:12:54.179 13:32:46 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:54.179 13:32:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.179 13:32:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.179 13:32:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.179 13:32:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.179 13:32:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.179 13:32:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.179 13:32:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.179 13:32:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.179 13:32:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.179 13:32:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.179 13:32:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.179 13:32:46 version -- scripts/common.sh@344 -- # case "$op" in 00:12:54.179 13:32:46 version -- scripts/common.sh@345 -- # : 1 00:12:54.179 13:32:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.179 13:32:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:54.179 13:32:46 version -- scripts/common.sh@365 -- # decimal 1 00:12:54.179 13:32:46 version -- scripts/common.sh@353 -- # local d=1 00:12:54.179 13:32:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.179 13:32:46 version -- scripts/common.sh@355 -- # echo 1 00:12:54.180 13:32:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.180 13:32:46 version -- scripts/common.sh@366 -- # decimal 2 00:12:54.180 13:32:46 version -- scripts/common.sh@353 -- # local d=2 00:12:54.180 13:32:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.180 13:32:46 version -- scripts/common.sh@355 -- # echo 2 00:12:54.180 13:32:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.180 13:32:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.180 13:32:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.180 13:32:46 version -- scripts/common.sh@368 -- # return 0 00:12:54.180 13:32:46 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.180 13:32:46 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:54.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.180 --rc genhtml_branch_coverage=1 00:12:54.180 --rc genhtml_function_coverage=1 00:12:54.180 --rc genhtml_legend=1 00:12:54.180 --rc geninfo_all_blocks=1 00:12:54.180 --rc geninfo_unexecuted_blocks=1 00:12:54.180 00:12:54.180 ' 00:12:54.180 13:32:46 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:54.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.180 --rc genhtml_branch_coverage=1 00:12:54.180 --rc genhtml_function_coverage=1 00:12:54.180 --rc genhtml_legend=1 00:12:54.180 --rc geninfo_all_blocks=1 00:12:54.180 --rc geninfo_unexecuted_blocks=1 00:12:54.180 00:12:54.180 ' 00:12:54.180 13:32:46 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:54.180 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:54.180 --rc genhtml_branch_coverage=1 00:12:54.180 --rc genhtml_function_coverage=1 00:12:54.180 --rc genhtml_legend=1 00:12:54.180 --rc geninfo_all_blocks=1 00:12:54.180 --rc geninfo_unexecuted_blocks=1 00:12:54.180 00:12:54.180 ' 00:12:54.180 13:32:46 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:54.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.180 --rc genhtml_branch_coverage=1 00:12:54.180 --rc genhtml_function_coverage=1 00:12:54.180 --rc genhtml_legend=1 00:12:54.180 --rc geninfo_all_blocks=1 00:12:54.180 --rc geninfo_unexecuted_blocks=1 00:12:54.180 00:12:54.180 ' 00:12:54.180 13:32:46 version -- app/version.sh@17 -- # get_header_version major 00:12:54.180 13:32:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # cut -f2 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # tr -d '"' 00:12:54.180 13:32:46 version -- app/version.sh@17 -- # major=25 00:12:54.180 13:32:46 version -- app/version.sh@18 -- # get_header_version minor 00:12:54.180 13:32:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # cut -f2 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # tr -d '"' 00:12:54.180 13:32:46 version -- app/version.sh@18 -- # minor=1 00:12:54.180 13:32:46 version -- app/version.sh@19 -- # get_header_version patch 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # cut -f2 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # tr -d '"' 00:12:54.180 13:32:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:54.180 13:32:46 version -- app/version.sh@19 -- # patch=0 00:12:54.180 13:32:46 version -- app/version.sh@20 -- # get_header_version suffix 00:12:54.180 13:32:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # cut -f2 00:12:54.180 13:32:46 version -- app/version.sh@14 -- # tr -d '"' 00:12:54.180 13:32:46 version -- app/version.sh@20 -- # suffix=-pre 00:12:54.180 13:32:46 version -- app/version.sh@22 -- # version=25.1 00:12:54.180 13:32:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:54.180 13:32:46 version -- app/version.sh@28 -- # version=25.1rc0 00:12:54.180 13:32:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:54.180 13:32:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:54.180 13:32:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:54.180 13:32:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:54.180 00:12:54.180 real 0m0.260s 00:12:54.180 user 0m0.179s 00:12:54.180 sys 0m0.110s 00:12:54.180 ************************************ 00:12:54.180 END TEST version 00:12:54.180 ************************************ 00:12:54.180 13:32:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.180 13:32:46 version -- common/autotest_common.sh@10 -- # set +x 00:12:54.441 13:32:46 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:54.441 13:32:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:12:54.441 13:32:46 -- spdk/autotest.sh@194 -- # uname -s 00:12:54.441 13:32:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:54.441 13:32:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:54.441 13:32:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:54.441 13:32:46 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:12:54.441 13:32:46 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:54.441 13:32:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.441 13:32:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.441 13:32:46 -- common/autotest_common.sh@10 -- # set +x 00:12:54.441 ************************************ 00:12:54.441 START TEST blockdev_nvme 00:12:54.441 ************************************ 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:54.441 * Looking for test storage... 00:12:54.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.441 13:32:46 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.441 --rc genhtml_branch_coverage=1 00:12:54.441 --rc genhtml_function_coverage=1 00:12:54.441 --rc genhtml_legend=1 00:12:54.441 --rc geninfo_all_blocks=1 00:12:54.441 --rc geninfo_unexecuted_blocks=1 00:12:54.441 00:12:54.441 ' 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.441 --rc genhtml_branch_coverage=1 00:12:54.441 --rc genhtml_function_coverage=1 00:12:54.441 --rc genhtml_legend=1 00:12:54.441 --rc geninfo_all_blocks=1 00:12:54.441 --rc geninfo_unexecuted_blocks=1 00:12:54.441 00:12:54.441 ' 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.441 --rc genhtml_branch_coverage=1 00:12:54.441 --rc genhtml_function_coverage=1 00:12:54.441 --rc genhtml_legend=1 00:12:54.441 --rc geninfo_all_blocks=1 00:12:54.441 --rc geninfo_unexecuted_blocks=1 00:12:54.441 00:12:54.441 ' 00:12:54.441 13:32:46 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.441 --rc genhtml_branch_coverage=1 00:12:54.441 --rc genhtml_function_coverage=1 00:12:54.441 --rc genhtml_legend=1 00:12:54.441 --rc geninfo_all_blocks=1 00:12:54.441 --rc geninfo_unexecuted_blocks=1 00:12:54.441 00:12:54.441 ' 00:12:54.441 13:32:46 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:54.441 13:32:46 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:54.441 13:32:46 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:54.441 13:32:46 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61305 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:54.442 13:32:46 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61305 00:12:54.442 13:32:46 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61305 ']' 00:12:54.442 13:32:46 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.442 13:32:46 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.442 13:32:46 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.442 13:32:46 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.442 13:32:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.704 [2024-11-20 13:32:46.593406] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
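setup_nvme_conf, which runs next, asks gen_nvme.sh for a bdev subsystem config that attaches one NVMe controller per emulated PCIe function and loads it with load_subsystem_config. Pretty-printed, the single-line payload visible below takes this shape (Nvme0 shown; Nvme1-3 differ only in name and traddr 0000:00:11.0 through 0000:00:13.0):

# Heredoc for readability only; the test pipes gen_nvme.sh straight to rpc_cmd.
cat <<'EOF'
{
  "subsystem": "bdev",
  "config": [
    {
      "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
    }
  ]
}
EOF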
00:12:54.704 [2024-11-20 13:32:46.593889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61305 ] 00:12:54.968 [2024-11-20 13:32:46.802933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.968 [2024-11-20 13:32:46.918143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.932 13:32:47 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.932 13:32:47 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:12:55.932 13:32:47 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:55.932 13:32:47 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:12:55.932 13:32:47 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:12:55.932 13:32:47 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:55.932 13:32:47 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:55.932 13:32:47 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:55.932 13:32:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.932 13:32:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 13:32:48 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 13:32:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:56.206 13:32:48 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:56.207 13:32:48 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "70f6ec72-2d96-4b92-8f89-74f7f3d79a26"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "70f6ec72-2d96-4b92-8f89-74f7f3d79a26",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d0cc9e7b-7383-418f-8326-006f3332d5d2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d0cc9e7b-7383-418f-8326-006f3332d5d2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f0cfe5bc-3123-4088-90f0-78ad86cba875"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f0cfe5bc-3123-4088-90f0-78ad86cba875",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "43eab56e-8b2a-4d22-933c-fb27c7c47947"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43eab56e-8b2a-4d22-933c-fb27c7c47947",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c66caa66-14bc-4bf3-96d5-2f3e35afc986"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "c66caa66-14bc-4bf3-96d5-2f3e35afc986",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "41180d8f-6fb1-448e-8ed1-14c9db366638"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "41180d8f-6fb1-448e-8ed1-14c9db366638",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:56.207 13:32:48 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:56.207 13:32:48 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:12:56.207 13:32:48 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:56.207 13:32:48 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61305 00:12:56.207 13:32:48 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61305 ']' 00:12:56.207 13:32:48 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61305 00:12:56.207 13:32:48 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:12:56.207 13:32:48 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.207 13:32:48 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61305 00:12:56.470 killing process with pid 61305 00:12:56.470 13:32:48 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.470 13:32:48 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.470 13:32:48 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61305' 00:12:56.470 13:32:48 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61305 00:12:56.470 13:32:48 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61305 00:12:58.372 13:32:50 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:58.372 13:32:50 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:58.372 13:32:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:58.372 13:32:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.372 13:32:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.372 ************************************ 00:12:58.372 START TEST bdev_hello_world 00:12:58.372 ************************************ 00:12:58.372 13:32:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:58.630 [2024-11-20 13:32:50.489080] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:12:58.630 [2024-11-20 13:32:50.489309] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61393 ] 00:12:58.630 [2024-11-20 13:32:50.667735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.889 [2024-11-20 13:32:50.805101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.456 [2024-11-20 13:32:51.447492] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:59.456 [2024-11-20 13:32:51.447551] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:59.456 [2024-11-20 13:32:51.447581] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:59.456 [2024-11-20 13:32:51.450689] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:59.456 [2024-11-20 13:32:51.451265] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:59.456 [2024-11-20 13:32:51.451312] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:59.456 [2024-11-20 13:32:51.451460] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
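(A minimal sketch for replaying this hello_bdev stage by hand, outside the autotest harness. The binary path and flags are the ones from the run_test invocation above; /tmp/hello_bdev.json is a hypothetical scratch file whose payload mirrors the bdev_nvme_attach_controller config loaded earlier in this log, trimmed to the first controller.)

# Hypothetical scratch config; same shape as the gen_nvme.sh output that
# blockdev.sh feeds to load_subsystem_config above, wrapped for --json use.
cat > /tmp/hello_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF

# Same binary and flags as above: hello_bdev opens Nvme0n1, writes
# "Hello World!", reads it back, then stops the app.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/hello_bdev.json -b Nvme0n1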
00:12:59.456 00:12:59.456 [2024-11-20 13:32:51.451504] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:00.830 00:13:00.830 real 0m2.074s 00:13:00.830 user 0m1.719s 00:13:00.830 sys 0m0.243s 00:13:00.830 13:32:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.830 13:32:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:00.830 ************************************ 00:13:00.830 END TEST bdev_hello_world 00:13:00.830 ************************************ 00:13:00.830 13:32:52 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:00.830 13:32:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.830 13:32:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.830 13:32:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.830 ************************************ 00:13:00.830 START TEST bdev_bounds 00:13:00.830 ************************************ 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:00.830 Process bdevio pid: 61442 00:13:00.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61442 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61442' 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61442 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61442 ']' 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.830 13:32:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:00.831 [2024-11-20 13:32:52.567407] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:13:00.831 [2024-11-20 13:32:52.567571] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61442 ] 00:13:00.831 [2024-11-20 13:32:52.742132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.831 [2024-11-20 13:32:52.850100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.831 [2024-11-20 13:32:52.850169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.831 [2024-11-20 13:32:52.850169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.766 13:32:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.766 13:32:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:01.766 13:32:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:01.766 I/O targets: 00:13:01.766 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:01.766 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:01.766 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:01.766 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:01.766 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:01.766 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:01.766 00:13:01.766 00:13:01.766 CUnit - A unit testing framework for C - Version 2.1-3 00:13:01.766 http://cunit.sourceforge.net/ 00:13:01.766 00:13:01.766 00:13:01.766 Suite: bdevio tests on: Nvme3n1 00:13:01.766 Test: blockdev write read block ...passed 00:13:01.766 Test: blockdev write zeroes read block ...passed 00:13:01.766 Test: blockdev write zeroes read no split ...passed 00:13:01.766 Test: blockdev write zeroes read split ...passed 00:13:01.766 Test: blockdev write zeroes read split partial ...passed 00:13:01.766 Test: blockdev reset ...[2024-11-20 13:32:53.723513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:01.766 passed 00:13:01.766 Test: blockdev write read 8 blocks ...[2024-11-20 13:32:53.727321] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:13:01.766 passed 00:13:01.766 Test: blockdev write read size > 128k ...passed 00:13:01.766 Test: blockdev write read invalid size ...passed 00:13:01.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:01.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:01.766 Test: blockdev write read max offset ...passed 00:13:01.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:01.766 Test: blockdev writev readv 8 blocks ...passed 00:13:01.766 Test: blockdev writev readv 30 x 1block ...passed 00:13:01.766 Test: blockdev writev readv block ...passed 00:13:01.766 Test: blockdev writev readv size > 128k ...passed 00:13:01.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:01.766 Test: blockdev comparev and writev ...[2024-11-20 13:32:53.734837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9e0a000 len:0x1000 00:13:01.766 [2024-11-20 13:32:53.734921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:01.766 passed 00:13:01.766 Test: blockdev nvme passthru rw ...passed 00:13:01.766 Test: blockdev nvme passthru vendor specific ...passed 00:13:01.766 Test: blockdev nvme admin passthru ...[2024-11-20 13:32:53.735716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:01.766 [2024-11-20 13:32:53.735768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:01.766 passed 00:13:01.766 Test: blockdev copy ...passed 00:13:01.766 Suite: bdevio tests on: Nvme2n3 00:13:01.766 Test: blockdev write read block ...passed 00:13:01.766 Test: blockdev write zeroes read block ...passed 00:13:01.766 Test: blockdev write zeroes read no split ...passed 00:13:01.766 Test: blockdev write zeroes read split ...passed 00:13:02.024 Test: blockdev write zeroes read split partial ...passed 00:13:02.025 Test: blockdev reset ...[2024-11-20 13:32:53.805205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:02.025 [2024-11-20 13:32:53.809793] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
00:13:02.025 00:13:02.025 Test: blockdev write read 8 blocks ...passed 00:13:02.025 Test: blockdev write read size > 128k ...passed 00:13:02.025 Test: blockdev write read invalid size ...passed 00:13:02.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.025 Test: blockdev write read max offset ...passed 00:13:02.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.025 Test: blockdev writev readv 8 blocks ...passed 00:13:02.025 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.025 Test: blockdev writev readv block ...passed 00:13:02.025 Test: blockdev writev readv size > 128k ...passed 00:13:02.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.025 Test: blockdev comparev and writev ...[2024-11-20 13:32:53.819110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29d006000 len:0x1000 00:13:02.025 [2024-11-20 13:32:53.819189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.025 Test: blockdev nvme passthru rw ...passed 00:13:02.025 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.025 Test: blockdev nvme admin passthru ...[2024-11-20 13:32:53.820093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:02.025 [2024-11-20 13:32:53.820144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.025 Test: blockdev copy ...passed 00:13:02.025 Suite: bdevio tests on: Nvme2n2 00:13:02.025 Test: blockdev write read block ...passed 00:13:02.025 Test: blockdev write zeroes read block ...passed 00:13:02.025 Test: blockdev write zeroes read no split ...passed 00:13:02.025 Test: blockdev write zeroes read split ...passed 00:13:02.025 Test: blockdev write zeroes read split partial ...passed 00:13:02.025 Test: blockdev reset ...[2024-11-20 13:32:53.883658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:02.025 passed 00:13:02.025 Test: blockdev write read 8 blocks ...[2024-11-20 13:32:53.887579] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:02.025 passed 00:13:02.025 Test: blockdev write read size > 128k ...passed 00:13:02.025 Test: blockdev write read invalid size ...passed 00:13:02.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.025 Test: blockdev write read max offset ...passed 00:13:02.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.025 Test: blockdev writev readv 8 blocks ...passed 00:13:02.025 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.025 Test: blockdev writev readv block ...passed 00:13:02.025 Test: blockdev writev readv size > 128k ...passed 00:13:02.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.025 Test: blockdev comparev and writev ...[2024-11-20 13:32:53.894183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d563c000 len:0x1000 00:13:02.025 [2024-11-20 13:32:53.894394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.025 Test: blockdev nvme passthru rw ...passed 00:13:02.025 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.025 Test: blockdev nvme admin passthru ...[2024-11-20 13:32:53.895108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:02.025 [2024-11-20 13:32:53.895156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.025 Test: blockdev copy ...passed 00:13:02.025 Suite: bdevio tests on: Nvme2n1 00:13:02.025 Test: blockdev write read block ...passed 00:13:02.025 Test: blockdev write zeroes read block ...passed 00:13:02.025 Test: blockdev write zeroes read no split ...passed 00:13:02.025 Test: blockdev write zeroes read split ...passed 00:13:02.025 Test: blockdev write zeroes read split partial ...passed 00:13:02.025 Test: blockdev reset ...[2024-11-20 13:32:53.963920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:02.025 [2024-11-20 13:32:53.967983] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:13:02.025 passed 00:13:02.025 Test: blockdev write read 8 blocks ...passed 00:13:02.025 Test: blockdev write read size > 128k ...passed 00:13:02.025 Test: blockdev write read invalid size ...passed 00:13:02.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.025 Test: blockdev write read max offset ...passed 00:13:02.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.025 Test: blockdev writev readv 8 blocks ...passed 00:13:02.025 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.025 Test: blockdev writev readv block ...passed 00:13:02.025 Test: blockdev writev readv size > 128k ...passed 00:13:02.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.025 Test: blockdev comparev and writev ...[2024-11-20 13:32:53.975427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5638000 len:0x1000 00:13:02.025 [2024-11-20 13:32:53.975500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.025 Test: blockdev nvme passthru rw ...passed 00:13:02.025 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:32:53.976184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:02.025 [2024-11-20 13:32:53.976227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.025 Test: blockdev nvme admin passthru ...passed 00:13:02.025 Test: blockdev copy ...passed 00:13:02.025 Suite: bdevio tests on: Nvme1n1 00:13:02.025 Test: blockdev write read block ...passed 00:13:02.025 Test: blockdev write zeroes read block ...passed 00:13:02.025 Test: blockdev write zeroes read no split ...passed 00:13:02.025 Test: blockdev write zeroes read split ...passed 00:13:02.025 Test: blockdev write zeroes read split partial ...passed 00:13:02.025 Test: blockdev reset ...[2024-11-20 13:32:54.044196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:02.025 [2024-11-20 13:32:54.047781] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:02.025 passed 00:13:02.025 Test: blockdev write read 8 blocks ...passed 00:13:02.025 Test: blockdev write read size > 128k ...passed 00:13:02.025 Test: blockdev write read invalid size ...passed 00:13:02.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.025 Test: blockdev write read max offset ...passed 00:13:02.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.025 Test: blockdev writev readv 8 blocks ...passed 00:13:02.025 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.025 Test: blockdev writev readv block ...passed 00:13:02.025 Test: blockdev writev readv size > 128k ...passed 00:13:02.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.025 Test: blockdev comparev and writev ...[2024-11-20 13:32:54.055045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5634000 len:0x1000 00:13:02.025 [2024-11-20 13:32:54.055114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.025 Test: blockdev nvme passthru rw ...passed 00:13:02.025 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:32:54.055892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:02.025 [2024-11-20 13:32:54.055966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:02.025 passed 00:13:02.284 Test: blockdev nvme admin passthru ...passed 00:13:02.284 Test: blockdev copy ...passed 00:13:02.284 Suite: bdevio tests on: Nvme0n1 00:13:02.284 Test: blockdev write read block ...passed 00:13:02.284 Test: blockdev write zeroes read block ...passed 00:13:02.284 Test: blockdev write zeroes read no split ...passed 00:13:02.284 Test: blockdev write zeroes read split ...passed 00:13:02.284 Test: blockdev write zeroes read split partial ...passed 00:13:02.284 Test: blockdev reset ...[2024-11-20 13:32:54.126555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:02.284 [2024-11-20 13:32:54.130357] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:13:02.284 passed 00:13:02.284 Test: blockdev write read 8 blocks ...passed 00:13:02.284 Test: blockdev write read size > 128k ...passed 00:13:02.284 Test: blockdev write read invalid size ...passed 00:13:02.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.284 Test: blockdev write read max offset ...passed 00:13:02.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.284 Test: blockdev writev readv 8 blocks ...passed 00:13:02.284 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.284 Test: blockdev writev readv block ...passed 00:13:02.284 Test: blockdev writev readv size > 128k ...passed 00:13:02.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.284 Test: blockdev comparev and writev ...passed 00:13:02.284 Test: blockdev nvme passthru rw ...[2024-11-20 13:32:54.141183] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:02.284 separate metadata which is not supported yet. 00:13:02.284 passed 00:13:02.284 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:32:54.141651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:02.284 [2024-11-20 13:32:54.141707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:02.284 passed 00:13:02.284 Test: blockdev nvme admin passthru ...passed 00:13:02.284 Test: blockdev copy ...passed 00:13:02.284 00:13:02.284 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.284 suites 6 6 n/a 0 0 00:13:02.284 tests 138 138 138 0 0 00:13:02.284 asserts 893 893 893 0 n/a 00:13:02.284 00:13:02.284 Elapsed time = 1.298 seconds 00:13:02.284 0 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61442 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61442 ']' 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61442 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61442 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61442' 00:13:02.284 killing process with pid 61442 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61442 00:13:02.284 13:32:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61442 00:13:03.218 13:32:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:03.218 00:13:03.218 real 0m2.623s 00:13:03.218 user 0m6.797s 00:13:03.218 sys 0m0.354s 00:13:03.218 13:32:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.218 13:32:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:03.218 ************************************ 00:13:03.218 END 
TEST bdev_bounds 00:13:03.218 ************************************ 00:13:03.218 13:32:55 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:03.218 13:32:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:03.218 13:32:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.218 13:32:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.218 ************************************ 00:13:03.218 START TEST bdev_nbd 00:13:03.218 ************************************ 00:13:03.218 13:32:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:03.218 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:03.218 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:03.218 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.218 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:03.218 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:03.218 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61496 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61496 /var/tmp/spdk-nbd.sock 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61496 ']' 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.219 
13:32:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:03.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.219 13:32:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:03.219 [2024-11-20 13:32:55.251223] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:03.219 [2024-11-20 13:32:55.251618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.478 [2024-11-20 13:32:55.434989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.737 [2024-11-20 13:32:55.564551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:04.308 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.309 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:04.309 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:04.309 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:04.309 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:04.309 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:04.309 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.309 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.877 13:32:56 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.877 1+0 records in 00:13:04.877 1+0 records out 00:13:04.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727341 s, 5.6 MB/s 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.877 13:32:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.136 1+0 records in 00:13:05.136 1+0 records out 00:13:05.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551624 s, 7.4 MB/s 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.136 13:32:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:05.136 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:05.393 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:05.393 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:05.393 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:05.393 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:05.393 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.393 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.393 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.394 1+0 records in 00:13:05.394 1+0 records out 00:13:05.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566076 s, 7.2 MB/s 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:05.394 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.651 1+0 records in 00:13:05.651 1+0 records out 00:13:05.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562429 s, 7.3 MB/s 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:05.651 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:05.908 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.176 1+0 records in 00:13:06.176 1+0 records out 00:13:06.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621483 s, 6.6 MB/s 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:06.176 13:32:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.436 1+0 records in 00:13:06.436 1+0 records out 00:13:06.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577336 s, 7.1 MB/s 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:06.436 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd0", 00:13:07.004 "bdev_name": "Nvme0n1" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd1", 00:13:07.004 "bdev_name": "Nvme1n1" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd2", 00:13:07.004 "bdev_name": "Nvme2n1" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd3", 00:13:07.004 "bdev_name": "Nvme2n2" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd4", 00:13:07.004 "bdev_name": "Nvme2n3" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd5", 00:13:07.004 "bdev_name": "Nvme3n1" 00:13:07.004 } 00:13:07.004 ]' 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd0", 00:13:07.004 "bdev_name": "Nvme0n1" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd1", 00:13:07.004 "bdev_name": "Nvme1n1" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 
"nbd_device": "/dev/nbd2", 00:13:07.004 "bdev_name": "Nvme2n1" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd3", 00:13:07.004 "bdev_name": "Nvme2n2" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd4", 00:13:07.004 "bdev_name": "Nvme2n3" 00:13:07.004 }, 00:13:07.004 { 00:13:07.004 "nbd_device": "/dev/nbd5", 00:13:07.004 "bdev_name": "Nvme3n1" 00:13:07.004 } 00:13:07.004 ]' 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.004 13:32:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.262 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.551 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:07.809 13:32:59 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.809 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:08.067 13:32:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.067 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.325 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
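(The stop calls traced above are the tail of the NBD start/verify/stop cycle this test drives. A sketch of one full round for a single device, using the same rpc.py script, socket path, and dd parameters that appear throughout this log:)

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'

# Export a bdev as a kernel block device over NBD.
$RPC nbd_start_disk Nvme0n1 /dev/nbd0

# The harness polls /proc/partitions (up to 20 tries) before trusting the
# node; then it reads one 4 KiB block back through the kernel block layer.
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

# List active exports (the nbd_device/bdev_name pairs shown above),
# then tear the export down again.
$RPC nbd_get_disks
$RPC nbd_stop_disk /dev/nbd0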
00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.583 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:09.148 13:33:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:09.407 /dev/nbd0 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.407 1+0 records in 00:13:09.407 1+0 records out 00:13:09.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062848 s, 6.5 MB/s 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:09.407 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:13:09.665 /dev/nbd1 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:09.923 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.924 1+0 records in 00:13:09.924 1+0 records out 
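
On the start side, waitfornbd (common/autotest_common.sh@872-893) is stricter than a device-node check: after polling /proc/partitions it reads one 4 KiB block with O_DIRECT into a scratch file and requires a non-zero byte count, proving the NBD connection actually services I/O. A sketch of the helper, with the scratch path taken from the trace and the retry interval assumed:

waitfornbd() {
    local nbd_name=$1
    local i
    local test_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # Wait for the device to show up in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # retry interval assumed
    done
    # Smoke-test a single 4 KiB O_DIRECT read; retry while the device settles.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    local size
    size=$(stat -c %s "$test_file")
    rm -f "$test_file"
    [ "$size" != 0 ]    # succeed only if the read actually returned data
}
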
00:13:09.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497267 s, 8.2 MB/s 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:09.924 13:33:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:13:10.182 /dev/nbd10 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.182 1+0 records in 00:13:10.182 1+0 records out 00:13:10.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431768 s, 9.5 MB/s 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.182 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:13:10.441 /dev/nbd11 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:10.441 13:33:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.441 1+0 records in 00:13:10.441 1+0 records out 00:13:10.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877301 s, 4.7 MB/s 00:13:10.441 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.699 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:10.699 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.699 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.699 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:10.699 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.699 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.699 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:13:10.957 /dev/nbd12 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.957 1+0 records in 00:13:10.957 1+0 records out 00:13:10.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680496 s, 6.0 MB/s 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.957 13:33:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:13:11.216 /dev/nbd13 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.216 1+0 records in 00:13:11.216 1+0 records out 00:13:11.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060501 s, 6.8 MB/s 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.216 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:11.783 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd0", 00:13:11.783 "bdev_name": "Nvme0n1" 00:13:11.783 }, 00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd1", 00:13:11.783 "bdev_name": "Nvme1n1" 00:13:11.783 }, 00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd10", 00:13:11.783 "bdev_name": "Nvme2n1" 00:13:11.783 }, 00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd11", 00:13:11.783 "bdev_name": "Nvme2n2" 00:13:11.783 }, 
00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd12", 00:13:11.783 "bdev_name": "Nvme2n3" 00:13:11.783 }, 00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd13", 00:13:11.783 "bdev_name": "Nvme3n1" 00:13:11.783 } 00:13:11.783 ]' 00:13:11.783 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd0", 00:13:11.783 "bdev_name": "Nvme0n1" 00:13:11.783 }, 00:13:11.783 { 00:13:11.783 "nbd_device": "/dev/nbd1", 00:13:11.783 "bdev_name": "Nvme1n1" 00:13:11.783 }, 00:13:11.783 { 00:13:11.784 "nbd_device": "/dev/nbd10", 00:13:11.784 "bdev_name": "Nvme2n1" 00:13:11.784 }, 00:13:11.784 { 00:13:11.784 "nbd_device": "/dev/nbd11", 00:13:11.784 "bdev_name": "Nvme2n2" 00:13:11.784 }, 00:13:11.784 { 00:13:11.784 "nbd_device": "/dev/nbd12", 00:13:11.784 "bdev_name": "Nvme2n3" 00:13:11.784 }, 00:13:11.784 { 00:13:11.784 "nbd_device": "/dev/nbd13", 00:13:11.784 "bdev_name": "Nvme3n1" 00:13:11.784 } 00:13:11.784 ]' 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:11.784 /dev/nbd1 00:13:11.784 /dev/nbd10 00:13:11.784 /dev/nbd11 00:13:11.784 /dev/nbd12 00:13:11.784 /dev/nbd13' 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:11.784 /dev/nbd1 00:13:11.784 /dev/nbd10 00:13:11.784 /dev/nbd11 00:13:11.784 /dev/nbd12 00:13:11.784 /dev/nbd13' 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:11.784 256+0 records in 00:13:11.784 256+0 records out 00:13:11.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681152 s, 154 MB/s 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:11.784 256+0 records in 00:13:11.784 256+0 records out 00:13:11.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114209 s, 9.2 MB/s 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:11.784 13:33:03 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:12.041 256+0 records in 00:13:12.041 256+0 records out 00:13:12.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126456 s, 8.3 MB/s 00:13:12.041 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.041 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:12.041 256+0 records in 00:13:12.041 256+0 records out 00:13:12.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121146 s, 8.7 MB/s 00:13:12.041 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.041 13:33:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:12.300 256+0 records in 00:13:12.300 256+0 records out 00:13:12.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123098 s, 8.5 MB/s 00:13:12.300 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.300 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:12.300 256+0 records in 00:13:12.300 256+0 records out 00:13:12.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117904 s, 8.9 MB/s 00:13:12.300 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.300 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:12.559 256+0 records in 00:13:12.559 256+0 records out 00:13:12.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132333 s, 7.9 MB/s 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:12.559 13:33:04 
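
nbd_dd_data_verify ties the whole export together: once nbd_get_disks confirms six devices, a 1 MiB random file is written to each one with O_DIRECT, and the verify pass re-reads every device and byte-compares it against the same file. Condensed from the traced commands (paths, sizes, and device list all from the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# All six exports must be up before any data is written.
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[ "$count" -eq 6 ]

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct   # write phase, bypassing the page cache
done
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"                              # verify phase: first MiB must match
done
rm "$tmp_file"
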
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.559 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.818 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.089 13:33:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.355 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.613 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.872 13:33:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:14.130 
13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.130 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.388 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:14.388 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.388 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:14.646 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:14.905 malloc_lvol_verify 00:13:14.905 13:33:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:15.163 374fe907-9da3-4f95-9ee4-dcaf6f3831a8 00:13:15.163 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:15.422 ddfb7a9a-7962-4098-af38-99dd55887c71 00:13:15.422 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:15.681 /dev/nbd0 00:13:15.681 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:15.681 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:15.681 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:15.681 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:15.681 13:33:07 
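
The lvol verification step stacks four RPCs before the mkfs that follows: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on it, a 4 MiB logical volume, and an NBD export of that volume; wait_for_nbd_set_capacity then checks /sys/block/nbd0/size, and the 8192 sectors it finds are exactly 4 MiB at 512 bytes each. The same chain as a standalone script (the UUIDs printed by the create calls appear in the log above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside that store
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export: 8192 x 512 B sectors
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
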
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:15.681 mke2fs 1.47.0 (5-Feb-2023) 00:13:15.681 Discarding device blocks: 0/4096 done 00:13:15.681 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:15.681 00:13:15.681 Allocating group tables: 0/1 done 00:13:15.681 Writing inode tables: 0/1 done 00:13:15.681 Creating journal (1024 blocks): done 00:13:15.940 Writing superblocks and filesystem accounting information: 0/1 done 00:13:15.940 00:13:15.940 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:15.940 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.940 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:15.940 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.940 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:15.940 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.940 13:33:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61496 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61496 ']' 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61496 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61496 00:13:16.199 killing process with pid 61496 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61496' 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61496 00:13:16.199 13:33:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61496 00:13:17.135 13:33:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:17.135 00:13:17.135 real 0m14.004s 00:13:17.135 user 0m20.861s 00:13:17.135 sys 0m4.103s 00:13:17.135 13:33:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.135 13:33:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
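
Teardown goes through killprocess rather than a bare kill. The traced checks are: a non-empty pid, kill -0 liveness, the platform (Linux), and the process's comm name, which comes back as reactor_0, SPDK's primary reactor thread; the comm lookup exists to special-case sudo-wrapped targets, a branch this run never takes. A reconstruction under those observations, with the sudo branch elided:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1             # '[' -z 61496 ']' in the trace
    kill -0 "$pid" || return 1            # must still be alive
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && :   # sudo-wrapped path elided; not taken here
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it so the exit status is collected
}
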
00:13:17.135 ************************************ 00:13:17.135 END TEST bdev_nbd 00:13:17.135 ************************************ 00:13:17.394 13:33:09 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:17.394 13:33:09 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:13:17.394 skipping fio tests on NVMe due to multi-ns failures. 00:13:17.394 13:33:09 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:13:17.394 13:33:09 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:17.394 13:33:09 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:17.394 13:33:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:17.394 13:33:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.394 13:33:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.394 ************************************ 00:13:17.394 START TEST bdev_verify 00:13:17.394 ************************************ 00:13:17.394 13:33:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:17.394 [2024-11-20 13:33:09.314004] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:17.394 [2024-11-20 13:33:09.314182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61915 ] 00:13:17.653 [2024-11-20 13:33:09.498009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:17.653 [2024-11-20 13:33:09.602881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.653 [2024-11-20 13:33:09.602905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.588 Running I/O for 5 seconds... 
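
bdev_verify is not a bespoke binary; it is the stock bdevperf example driven by the test's bdev.json in verify mode, which checks every block it writes by reading it back. The invocation is verbatim from the trace; the pairing of a 0x1 and a 0x2 core-mask job per bdev in the results comes from running two reactors (-m 0x3) together with -C:

# 128-deep queue, 4 KiB I/Os, verify workload, 5 seconds, cores 0-1.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
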
00:13:20.531 17472.00 IOPS, 68.25 MiB/s [2024-11-20T13:33:13.504Z] 18304.00 IOPS, 71.50 MiB/s [2024-11-20T13:33:14.879Z] 18389.33 IOPS, 71.83 MiB/s [2024-11-20T13:33:15.447Z] 18176.00 IOPS, 71.00 MiB/s [2024-11-20T13:33:15.447Z] 18099.20 IOPS, 70.70 MiB/s 00:13:23.408 Latency(us) 00:13:23.408 [2024-11-20T13:33:15.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.408 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x0 length 0xbd0bd 00:13:23.408 Nvme0n1 : 5.09 1483.65 5.80 0.00 0.00 86068.98 18350.08 108193.98 00:13:23.408 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:23.408 Nvme0n1 : 5.10 1507.34 5.89 0.00 0.00 84717.56 16562.73 87699.08 00:13:23.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x0 length 0xa0000 00:13:23.408 Nvme1n1 : 5.09 1483.01 5.79 0.00 0.00 85934.55 19065.02 102951.10 00:13:23.408 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0xa0000 length 0xa0000 00:13:23.408 Nvme1n1 : 5.10 1506.40 5.88 0.00 0.00 84648.70 15966.95 83409.45 00:13:23.408 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x0 length 0x80000 00:13:23.408 Nvme2n1 : 5.09 1482.41 5.79 0.00 0.00 85847.43 18469.24 109623.85 00:13:23.408 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x80000 length 0x80000 00:13:23.408 Nvme2n1 : 5.10 1505.78 5.88 0.00 0.00 84503.89 16562.73 78166.57 00:13:23.408 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x0 length 0x80000 00:13:23.408 Nvme2n2 : 5.10 1480.90 5.78 0.00 0.00 85743.16 21686.46 112006.98 00:13:23.408 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x80000 length 0x80000 00:13:23.408 Nvme2n2 : 5.10 1504.53 5.88 0.00 0.00 84426.59 19065.02 78166.57 00:13:23.408 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x0 length 0x80000 00:13:23.408 Nvme2n3 : 5.10 1480.11 5.78 0.00 0.00 85628.96 17992.61 112006.98 00:13:23.408 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x80000 length 0x80000 00:13:23.408 Nvme2n3 : 5.11 1503.56 5.87 0.00 0.00 84326.54 16920.20 84839.33 00:13:23.408 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x0 length 0x20000 00:13:23.408 Nvme3n1 : 5.10 1479.55 5.78 0.00 0.00 85514.13 12571.00 111053.73 00:13:23.408 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:23.408 Verification LBA range: start 0x20000 length 0x20000 00:13:23.408 Nvme3n1 : 5.11 1503.13 5.87 0.00 0.00 84209.63 11260.28 88175.71 00:13:23.408 [2024-11-20T13:33:15.447Z] =================================================================================================================== 00:13:23.408 [2024-11-20T13:33:15.447Z] Total : 17920.39 70.00 0.00 0.00 85125.31 11260.28 112006.98 00:13:24.785 00:13:24.785 real 0m7.499s 00:13:24.785 user 0m13.866s 00:13:24.785 sys 0m0.249s 00:13:24.785 13:33:16 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.785 13:33:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:24.785 ************************************ 00:13:24.785 END TEST bdev_verify 00:13:24.785 ************************************ 00:13:24.785 13:33:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:24.785 13:33:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:24.785 13:33:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.785 13:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.785 ************************************ 00:13:24.785 START TEST bdev_verify_big_io 00:13:24.785 ************************************ 00:13:24.785 13:33:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:25.044 [2024-11-20 13:33:16.862310] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:25.044 [2024-11-20 13:33:16.862515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62013 ] 00:13:25.044 [2024-11-20 13:33:17.046995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.302 [2024-11-20 13:33:17.153410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.302 [2024-11-20 13:33:17.153420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.259 Running I/O for 5 seconds... 
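
bdev_verify_big_io is the same harness with one flag changed: 64 KiB I/Os instead of 4 KiB. That single change explains the shape of the results below, roughly a tenth of the IOPS of the 4 KiB run but higher MiB/s:

# Only -o differs from the bdev_verify run above (65536 bytes = 64 KiB).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3
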
00:13:32.082 1929.00 IOPS, 120.56 MiB/s [2024-11-20T13:33:24.121Z] 3239.50 IOPS, 202.47 MiB/s 00:13:32.082 Latency(us) 00:13:32.082 [2024-11-20T13:33:24.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.082 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x0 length 0xbd0b 00:13:32.082 Nvme0n1 : 5.84 119.49 7.47 0.00 0.00 1015160.32 15728.64 1151527.10 00:13:32.082 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:32.082 Nvme0n1 : 5.77 117.86 7.37 0.00 0.00 1035899.16 16562.73 1075267.03 00:13:32.082 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x0 length 0xa000 00:13:32.082 Nvme1n1 : 5.93 116.78 7.30 0.00 0.00 1010842.00 91988.71 1616713.54 00:13:32.082 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0xa000 length 0xa000 00:13:32.082 Nvme1n1 : 5.77 120.93 7.56 0.00 0.00 992410.33 77689.95 1037136.99 00:13:32.082 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x0 length 0x8000 00:13:32.082 Nvme2n1 : 5.93 117.43 7.34 0.00 0.00 969503.77 114390.11 1639591.56 00:13:32.082 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x8000 length 0x8000 00:13:32.082 Nvme2n1 : 5.77 121.91 7.62 0.00 0.00 955370.34 78643.20 865551.83 00:13:32.082 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x0 length 0x8000 00:13:32.082 Nvme2n2 : 5.99 125.92 7.87 0.00 0.00 880569.95 42181.35 1662469.59 00:13:32.082 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x8000 length 0x8000 00:13:32.082 Nvme2n2 : 5.85 125.91 7.87 0.00 0.00 899047.07 66727.56 907494.87 00:13:32.082 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x0 length 0x8000 00:13:32.082 Nvme2n3 : 6.02 140.15 8.76 0.00 0.00 766146.65 22163.08 1227787.17 00:13:32.082 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x8000 length 0x8000 00:13:32.082 Nvme2n3 : 5.96 134.19 8.39 0.00 0.00 818246.75 56480.12 949437.91 00:13:32.082 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x0 length 0x2000 00:13:32.082 Nvme3n1 : 6.10 155.85 9.74 0.00 0.00 673500.67 1668.19 1746355.67 00:13:32.082 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.082 Verification LBA range: start 0x2000 length 0x2000 00:13:32.082 Nvme3n1 : 5.97 145.81 9.11 0.00 0.00 732831.64 2308.65 960876.92 00:13:32.082 [2024-11-20T13:33:24.121Z] =================================================================================================================== 00:13:32.082 [2024-11-20T13:33:24.121Z] Total : 1542.21 96.39 0.00 0.00 883225.52 1668.19 1746355.67 00:13:34.039 00:13:34.039 real 0m9.093s 00:13:34.039 user 0m16.981s 00:13:34.039 sys 0m0.291s 00:13:34.039 13:33:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.039 13:33:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:13:34.039 ************************************ 00:13:34.039 END TEST bdev_verify_big_io 00:13:34.039 ************************************ 00:13:34.039 13:33:25 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.039 13:33:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:34.039 13:33:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.039 13:33:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.039 ************************************ 00:13:34.039 START TEST bdev_write_zeroes 00:13:34.039 ************************************ 00:13:34.039 13:33:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.039 [2024-11-20 13:33:25.993520] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:34.039 [2024-11-20 13:33:25.993697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62134 ] 00:13:34.297 [2024-11-20 13:33:26.190042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.297 [2024-11-20 13:33:26.317329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.232 Running I/O for 1 seconds... 00:13:36.166 35279.00 IOPS, 137.81 MiB/s 00:13:36.166 Latency(us) 00:13:36.166 [2024-11-20T13:33:28.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.166 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:36.166 Nvme0n1 : 1.03 5705.98 22.29 0.00 0.00 22374.06 6136.55 112006.98 00:13:36.166 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:36.166 Nvme1n1 : 1.04 5871.45 22.94 0.00 0.00 21710.57 10485.76 56718.43 00:13:36.166 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:36.166 Nvme2n1 : 1.04 5862.02 22.90 0.00 0.00 21678.83 10247.45 56480.12 00:13:36.166 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:36.166 Nvme2n2 : 1.04 5852.64 22.86 0.00 0.00 21624.40 7804.74 56480.12 00:13:36.166 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:36.166 Nvme2n3 : 1.04 5884.59 22.99 0.00 0.00 21471.68 7923.90 56241.80 00:13:36.166 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:36.166 Nvme3n1 : 1.04 5919.50 23.12 0.00 0.00 21304.42 7596.22 55050.24 00:13:36.166 [2024-11-20T13:33:28.205Z] =================================================================================================================== 00:13:36.166 [2024-11-20T13:33:28.205Z] Total : 35096.18 137.09 0.00 0.00 21689.40 6136.55 112006.98 00:13:37.551 00:13:37.551 real 0m3.263s 00:13:37.551 user 0m2.890s 00:13:37.551 sys 0m0.245s 00:13:37.551 13:33:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.551 13:33:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:37.551 ************************************ 00:13:37.551 END TEST 
bdev_write_zeroes 00:13:37.551 ************************************ 00:13:37.551 13:33:29 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.551 13:33:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:37.551 13:33:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.551 13:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.551 ************************************ 00:13:37.551 START TEST bdev_json_nonenclosed 00:13:37.551 ************************************ 00:13:37.551 13:33:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.551 [2024-11-20 13:33:29.303285] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:37.551 [2024-11-20 13:33:29.303464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62193 ] 00:13:37.551 [2024-11-20 13:33:29.485823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.810 [2024-11-20 13:33:29.591344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.810 [2024-11-20 13:33:29.591466] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:37.810 [2024-11-20 13:33:29.591496] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:37.811 [2024-11-20 13:33:29.591511] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.069 00:13:38.069 real 0m0.650s 00:13:38.069 user 0m0.411s 00:13:38.069 sys 0m0.133s 00:13:38.069 13:33:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.069 13:33:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:38.069 ************************************ 00:13:38.069 END TEST bdev_json_nonenclosed 00:13:38.069 ************************************ 00:13:38.069 13:33:29 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.069 13:33:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:38.069 13:33:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.069 13:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.069 ************************************ 00:13:38.069 START TEST bdev_json_nonarray 00:13:38.069 ************************************ 00:13:38.069 13:33:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.069 [2024-11-20 13:33:29.990817] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
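
bdev_json_nonenclosed above is a negative test: bdevperf is fed a config whose top level is not wrapped in {}, and the test passes precisely because the app refuses to start (the *ERROR* from json_config_prepare_ctx followed by spdk_app_stop'd on non-zero, after which run_test still reports END TEST). The log never shows the file's contents, so this reproduction is a guess at a minimal payload that triggers the same error:

# Hypothetical payload: a bare key/value at top level instead of an enclosing object.
cat > /tmp/nonenclosed.json << 'EOF'
"subsystems": []
EOF
# Expected: *ERROR*: Invalid JSON configuration: not enclosed in {}. and a non-zero exit.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
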
00:13:38.069 [2024-11-20 13:33:29.990985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62218 ] 00:13:38.328 [2024-11-20 13:33:30.166785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.328 [2024-11-20 13:33:30.272200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.328 [2024-11-20 13:33:30.272529] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:38.328 [2024-11-20 13:33:30.272657] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:38.328 [2024-11-20 13:33:30.272746] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.586 00:13:38.586 real 0m0.641s 00:13:38.586 user 0m0.404s 00:13:38.586 sys 0m0.129s 00:13:38.586 13:33:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.586 13:33:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:38.586 ************************************ 00:13:38.586 END TEST bdev_json_nonarray 00:13:38.586 ************************************ 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:13:38.586 13:33:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:13:38.586 00:13:38.586 real 0m44.335s 00:13:38.586 user 1m8.313s 00:13:38.586 sys 0m6.600s 00:13:38.586 13:33:30 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.586 13:33:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.586 ************************************ 00:13:38.586 END TEST blockdev_nvme 00:13:38.586 ************************************ 00:13:38.844 13:33:30 -- spdk/autotest.sh@209 -- # uname -s 00:13:38.844 13:33:30 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:13:38.844 13:33:30 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:38.844 13:33:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:38.844 13:33:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.844 13:33:30 -- common/autotest_common.sh@10 -- # set +x 00:13:38.844 ************************************ 00:13:38.844 START TEST blockdev_nvme_gpt 00:13:38.844 ************************************ 00:13:38.844 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:38.844 * Looking for test storage... 
00:13:38.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:38.844 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:38.844 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:13:38.844 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:38.844 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:38.844 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.844 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.844 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.844 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.844 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.844 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.844 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.845 13:33:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:38.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.845 --rc genhtml_branch_coverage=1 00:13:38.845 --rc genhtml_function_coverage=1 00:13:38.845 --rc genhtml_legend=1 00:13:38.845 --rc geninfo_all_blocks=1 00:13:38.845 --rc geninfo_unexecuted_blocks=1 00:13:38.845 00:13:38.845 ' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:38.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.845 --rc 
genhtml_branch_coverage=1 00:13:38.845 --rc genhtml_function_coverage=1 00:13:38.845 --rc genhtml_legend=1 00:13:38.845 --rc geninfo_all_blocks=1 00:13:38.845 --rc geninfo_unexecuted_blocks=1 00:13:38.845 00:13:38.845 ' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:38.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.845 --rc genhtml_branch_coverage=1 00:13:38.845 --rc genhtml_function_coverage=1 00:13:38.845 --rc genhtml_legend=1 00:13:38.845 --rc geninfo_all_blocks=1 00:13:38.845 --rc geninfo_unexecuted_blocks=1 00:13:38.845 00:13:38.845 ' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:38.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.845 --rc genhtml_branch_coverage=1 00:13:38.845 --rc genhtml_function_coverage=1 00:13:38.845 --rc genhtml_legend=1 00:13:38.845 --rc geninfo_all_blocks=1 00:13:38.845 --rc geninfo_unexecuted_blocks=1 00:13:38.845 00:13:38.845 ' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62302 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62302 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62302 ']' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.845 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:39.104 [2024-11-20 13:33:30.965485] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:39.104 [2024-11-20 13:33:30.966175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62302 ] 00:13:39.362 [2024-11-20 13:33:31.165246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.362 [2024-11-20 13:33:31.298001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.343 13:33:32 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.343 13:33:32 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:13:40.343 13:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:40.343 13:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:13:40.343 13:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:40.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:40.602 Waiting for block devices as requested 00:13:40.602 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.860 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.860 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.860 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:46.125 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:13:46.125 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:13:46.125 BYT; 00:13:46.126 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:13:46.126 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:13:46.126 BYT; 00:13:46.126 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:13:46.126 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:13:46.126 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:13:46.126 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:13:46.126 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:13:46.126 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:46.126 13:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:13:46.126 13:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:46.126 13:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:46.126 13:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:46.126 13:33:38 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:46.126 13:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:46.126 13:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:13:47.060 The operation has completed successfully. 00:13:47.060 13:33:39 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:13:48.433 The operation has completed successfully. 00:13:48.433 13:33:40 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:48.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:48.950 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.208 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.208 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.208 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:13:49.208 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.208 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.208 [] 00:13:49.208 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.208 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:13:49.208 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:13:49.208 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:49.208 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:49.208 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:49.208 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.208 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.466 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.466 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:49.466 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.466 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:49.725 13:33:41 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:49.725 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:49.725 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:49.726 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ed4e57bd-778b-4206-81ed-f315c38db02c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ed4e57bd-778b-4206-81ed-f315c38db02c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "574bc9ce-c6fc-40ee-bcc0-4c61e3e5f6a1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "574bc9ce-c6fc-40ee-bcc0-4c61e3e5f6a1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "931d33ce-5ecb-4624-a432-c93d62f10f86"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "931d33ce-5ecb-4624-a432-c93d62f10f86",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "442837eb-2ac9-420c-9312-1f8e3d9be2b9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "442837eb-2ac9-420c-9312-1f8e3d9be2b9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "10a9695b-3a78-4411-b489-360a5aab5ebd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "10a9695b-3a78-4411-b489-360a5aab5ebd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:49.726 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:49.726 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:13:49.726 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:49.726 13:33:41 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62302 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62302 ']' 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62302 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62302 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.726 killing process with pid 62302 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62302' 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62302 00:13:49.726 13:33:41 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62302 00:13:52.277 13:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:52.277 13:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:52.277 13:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:52.277 13:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.277 13:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:52.277 ************************************ 00:13:52.277 START TEST bdev_hello_world 00:13:52.277 ************************************ 00:13:52.277 13:33:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:52.277 
[2024-11-20 13:33:43.911202] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:52.277 [2024-11-20 13:33:43.911432] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62933 ] 00:13:52.277 [2024-11-20 13:33:44.093316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.277 [2024-11-20 13:33:44.196662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.844 [2024-11-20 13:33:44.813050] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:52.844 [2024-11-20 13:33:44.813111] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:52.844 [2024-11-20 13:33:44.813146] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:52.844 [2024-11-20 13:33:44.816160] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:52.844 [2024-11-20 13:33:44.816627] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:52.844 [2024-11-20 13:33:44.816668] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:52.844 [2024-11-20 13:33:44.816908] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:52.844 00:13:52.844 [2024-11-20 13:33:44.816954] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:53.780 00:13:53.780 real 0m2.008s 00:13:53.780 user 0m1.671s 00:13:53.780 sys 0m0.226s 00:13:53.780 13:33:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.780 13:33:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:53.780 ************************************ 00:13:53.780 END TEST bdev_hello_world 00:13:53.780 ************************************ 00:13:54.039 13:33:45 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:54.039 13:33:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.039 13:33:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.039 13:33:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:54.039 ************************************ 00:13:54.039 START TEST bdev_bounds 00:13:54.039 ************************************ 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62975 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62975' 00:13:54.039 Process bdevio pid: 62975 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62975 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62975 ']' 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.039 13:33:45 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.039 13:33:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:54.039 [2024-11-20 13:33:45.935080] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:13:54.039 [2024-11-20 13:33:45.935235] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62975 ] 00:13:54.298 [2024-11-20 13:33:46.108486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.298 [2024-11-20 13:33:46.215397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.298 [2024-11-20 13:33:46.215514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.298 [2024-11-20 13:33:46.215524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.231 13:33:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.231 13:33:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:55.231 13:33:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:55.231 I/O targets: 00:13:55.231 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:55.231 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:13:55.231 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:13:55.231 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:55.231 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:55.231 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:55.231 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:55.231 00:13:55.231 00:13:55.231 CUnit - A unit testing framework for C - Version 2.1-3 00:13:55.231 http://cunit.sourceforge.net/ 00:13:55.231 00:13:55.231 00:13:55.231 Suite: bdevio tests on: Nvme3n1 00:13:55.231 Test: blockdev write read block ...passed 00:13:55.231 Test: blockdev write zeroes read block ...passed 00:13:55.231 Test: blockdev write zeroes read no split ...passed 00:13:55.231 Test: blockdev write zeroes read split ...passed 00:13:55.231 Test: blockdev write zeroes read split partial ...passed 00:13:55.231 Test: blockdev reset ...[2024-11-20 13:33:47.187429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:55.231 [2024-11-20 13:33:47.191559] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:13:55.231 passed 00:13:55.231 Test: blockdev write read 8 blocks ...passed 00:13:55.231 Test: blockdev write read size > 128k ...passed 00:13:55.231 Test: blockdev write read invalid size ...passed 00:13:55.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:55.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:55.231 Test: blockdev write read max offset ...passed 00:13:55.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:55.231 Test: blockdev writev readv 8 blocks ...passed 00:13:55.231 Test: blockdev writev readv 30 x 1block ...passed 00:13:55.231 Test: blockdev writev readv block ...passed 00:13:55.231 Test: blockdev writev readv size > 128k ...passed 00:13:55.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:55.231 Test: blockdev comparev and writev ...[2024-11-20 13:33:47.200833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7e04000 len:0x1000 00:13:55.231 [2024-11-20 13:33:47.200913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:55.231 passed 00:13:55.231 Test: blockdev nvme passthru rw ...passed 00:13:55.231 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:33:47.201812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:55.231 [2024-11-20 13:33:47.201897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:55.231 passed 00:13:55.231 Test: blockdev nvme admin passthru ...passed 00:13:55.231 Test: blockdev copy ...passed 00:13:55.231 Suite: bdevio tests on: Nvme2n3 00:13:55.231 Test: blockdev write read block ...passed 00:13:55.231 Test: blockdev write zeroes read block ...passed 00:13:55.231 Test: blockdev write zeroes read no split ...passed 00:13:55.231 Test: blockdev write zeroes read split ...passed 00:13:55.489 Test: blockdev write zeroes read split partial ...passed 00:13:55.489 Test: blockdev reset ...[2024-11-20 13:33:47.275698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:55.489 [2024-11-20 13:33:47.280041] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:55.489 passed 00:13:55.489 Test: blockdev write read 8 blocks ...passed 00:13:55.489 Test: blockdev write read size > 128k ...passed 00:13:55.489 Test: blockdev write read invalid size ...passed 00:13:55.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:55.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:55.489 Test: blockdev write read max offset ...passed 00:13:55.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:55.489 Test: blockdev writev readv 8 blocks ...passed 00:13:55.489 Test: blockdev writev readv 30 x 1block ...passed 00:13:55.489 Test: blockdev writev readv block ...passed 00:13:55.489 Test: blockdev writev readv size > 128k ...passed 00:13:55.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:55.489 Test: blockdev comparev and writev ...[2024-11-20 13:33:47.291895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7e02000 len:0x1000 00:13:55.489 [2024-11-20 13:33:47.291967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:55.489 passed 00:13:55.489 Test: blockdev nvme passthru rw ...passed 00:13:55.489 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:33:47.293222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:55.489 passed 00:13:55.489 Test: blockdev nvme admin passthru ...[2024-11-20 13:33:47.293296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:55.489 passed 00:13:55.489 Test: blockdev copy ...passed 00:13:55.490 Suite: bdevio tests on: Nvme2n2 00:13:55.490 Test: blockdev write read block ...passed 00:13:55.490 Test: blockdev write zeroes read block ...passed 00:13:55.490 Test: blockdev write zeroes read no split ...passed 00:13:55.490 Test: blockdev write zeroes read split ...passed 00:13:55.490 Test: blockdev write zeroes read split partial ...passed 00:13:55.490 Test: blockdev reset ...[2024-11-20 13:33:47.353305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:55.490 [2024-11-20 13:33:47.358197] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:55.490 passed 00:13:55.490 Test: blockdev write read 8 blocks ...passed 00:13:55.490 Test: blockdev write read size > 128k ...passed 00:13:55.490 Test: blockdev write read invalid size ...passed 00:13:55.490 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:55.490 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:55.490 Test: blockdev write read max offset ...passed 00:13:55.490 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:55.490 Test: blockdev writev readv 8 blocks ...passed 00:13:55.490 Test: blockdev writev readv 30 x 1block ...passed 00:13:55.490 Test: blockdev writev readv block ...passed 00:13:55.490 Test: blockdev writev readv size > 128k ...passed 00:13:55.490 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:55.490 Test: blockdev comparev and writev ...[2024-11-20 13:33:47.365424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca438000 len:0x1000 00:13:55.490 [2024-11-20 13:33:47.365486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:55.490 passed 00:13:55.490 Test: blockdev nvme passthru rw ...passed 00:13:55.490 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:33:47.366504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:55.490 passed 00:13:55.490 Test: blockdev nvme admin passthru ...[2024-11-20 13:33:47.366556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:55.490 passed 00:13:55.490 Test: blockdev copy ...passed 00:13:55.490 Suite: bdevio tests on: Nvme2n1 00:13:55.490 Test: blockdev write read block ...passed 00:13:55.490 Test: blockdev write zeroes read block ...passed 00:13:55.490 Test: blockdev write zeroes read no split ...passed 00:13:55.490 Test: blockdev write zeroes read split ...passed 00:13:55.490 Test: blockdev write zeroes read split partial ...passed 00:13:55.490 Test: blockdev reset ...[2024-11-20 13:33:47.439156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:55.490 [2024-11-20 13:33:47.443673] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:55.490 passed 00:13:55.490 Test: blockdev write read 8 blocks ...passed 00:13:55.490 Test: blockdev write read size > 128k ...passed 00:13:55.490 Test: blockdev write read invalid size ...passed 00:13:55.490 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:55.490 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:55.490 Test: blockdev write read max offset ...passed 00:13:55.490 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:55.490 Test: blockdev writev readv 8 blocks ...passed 00:13:55.490 Test: blockdev writev readv 30 x 1block ...passed 00:13:55.490 Test: blockdev writev readv block ...passed 00:13:55.490 Test: blockdev writev readv size > 128k ...passed 00:13:55.490 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:55.490 Test: blockdev comparev and writev ...[2024-11-20 13:33:47.450990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca434000 len:0x1000 00:13:55.490 passed 00:13:55.490 Test: blockdev nvme passthru rw ...[2024-11-20 13:33:47.451073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:55.490 passed 00:13:55.490 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:33:47.451859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:55.490 [2024-11-20 13:33:47.451929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:55.490 passed 00:13:55.490 Test: blockdev nvme admin passthru ...passed 00:13:55.490 Test: blockdev copy ...passed 00:13:55.490 Suite: bdevio tests on: Nvme1n1p2 00:13:55.490 Test: blockdev write read block ...passed 00:13:55.490 Test: blockdev write zeroes read block ...passed 00:13:55.490 Test: blockdev write zeroes read no split ...passed 00:13:55.490 Test: blockdev write zeroes read split ...passed 00:13:55.749 Test: blockdev write zeroes read split partial ...passed 00:13:55.749 Test: blockdev reset ...[2024-11-20 13:33:47.535037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:55.749 [2024-11-20 13:33:47.539098] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:55.749 passed 00:13:55.749 Test: blockdev write read 8 blocks ...passed 00:13:55.749 Test: blockdev write read size > 128k ...passed 00:13:55.749 Test: blockdev write read invalid size ...passed 00:13:55.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:55.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:55.749 Test: blockdev write read max offset ...passed 00:13:55.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:55.749 Test: blockdev writev readv 8 blocks ...passed 00:13:55.749 Test: blockdev writev readv 30 x 1block ...passed 00:13:55.749 Test: blockdev writev readv block ...passed 00:13:55.749 Test: blockdev writev readv size > 128k ...passed 00:13:55.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:55.749 Test: blockdev comparev and writev ...[2024-11-20 13:33:47.548528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ca430000 len:0x1000 00:13:55.749 [2024-11-20 13:33:47.548590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:55.749 passed 00:13:55.749 Test: blockdev nvme passthru rw ...passed 00:13:55.749 Test: blockdev nvme passthru vendor specific ...passed 00:13:55.749 Test: blockdev nvme admin passthru ...passed 00:13:55.749 Test: blockdev copy ...passed 00:13:55.749 Suite: bdevio tests on: Nvme1n1p1 00:13:55.749 Test: blockdev write read block ...passed 00:13:55.749 Test: blockdev write zeroes read block ...passed 00:13:55.749 Test: blockdev write zeroes read no split ...passed 00:13:55.749 Test: blockdev write zeroes read split ...passed 00:13:55.749 Test: blockdev write zeroes read split partial ...passed 00:13:55.749 Test: blockdev reset ...[2024-11-20 13:33:47.605726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:55.749 [2024-11-20 13:33:47.609290] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:55.749 passed 00:13:55.749 Test: blockdev write read 8 blocks ...passed 00:13:55.749 Test: blockdev write read size > 128k ...passed 00:13:55.749 Test: blockdev write read invalid size ...passed 00:13:55.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:55.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:55.749 Test: blockdev write read max offset ...passed 00:13:55.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:55.749 Test: blockdev writev readv 8 blocks ...passed 00:13:55.749 Test: blockdev writev readv 30 x 1block ...passed 00:13:55.749 Test: blockdev writev readv block ...passed 00:13:55.749 Test: blockdev writev readv size > 128k ...passed 00:13:55.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:55.749 Test: blockdev comparev and writev ...[2024-11-20 13:33:47.618272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b800e000 len:0x1000 00:13:55.749 [2024-11-20 13:33:47.618343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:55.749 passed 00:13:55.749 Test: blockdev nvme passthru rw ...passed 00:13:55.749 Test: blockdev nvme passthru vendor specific ...passed 00:13:55.749 Test: blockdev nvme admin passthru ...passed 00:13:55.749 Test: blockdev copy ...passed 00:13:55.749 Suite: bdevio tests on: Nvme0n1 00:13:55.749 Test: blockdev write read block ...passed 00:13:55.749 Test: blockdev write zeroes read block ...passed 00:13:55.749 Test: blockdev write zeroes read no split ...passed 00:13:55.749 Test: blockdev write zeroes read split ...passed 00:13:55.749 Test: blockdev write zeroes read split partial ...passed 00:13:55.749 Test: blockdev reset ...[2024-11-20 13:33:47.679735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:55.749 [2024-11-20 13:33:47.683342] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:55.749 passed 00:13:55.749 Test: blockdev write read 8 blocks ...passed 00:13:55.749 Test: blockdev write read size > 128k ...passed 00:13:55.749 Test: blockdev write read invalid size ...passed 00:13:55.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:55.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:55.749 Test: blockdev write read max offset ...passed 00:13:55.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:55.749 Test: blockdev writev readv 8 blocks ...passed 00:13:55.749 Test: blockdev writev readv 30 x 1block ...passed 00:13:55.749 Test: blockdev writev readv block ...passed 00:13:55.749 Test: blockdev writev readv size > 128k ...passed 00:13:55.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:55.750 Test: blockdev comparev and writev ...passed 00:13:55.750 Test: blockdev nvme passthru rw ...[2024-11-20 13:33:47.689517] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:55.750 separate metadata which is not supported yet. 
00:13:55.750 passed 00:13:55.750 Test: blockdev nvme passthru vendor specific ...passed 00:13:55.750 Test: blockdev nvme admin passthru ...[2024-11-20 13:33:47.690099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:55.750 [2024-11-20 13:33:47.690152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:55.750 passed 00:13:55.750 Test: blockdev copy ...passed 00:13:55.750 00:13:55.750 Run Summary: Type Total Ran Passed Failed Inactive 00:13:55.750 suites 7 7 n/a 0 0 00:13:55.750 tests 161 161 161 0 0 00:13:55.750 asserts 1025 1025 1025 0 n/a 00:13:55.750 00:13:55.750 Elapsed time = 1.590 seconds 00:13:55.750 0 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62975 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62975 ']' 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62975 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62975 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62975' 00:13:55.750 killing process with pid 62975 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62975 00:13:55.750 13:33:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62975 00:13:56.686 13:33:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:56.686 00:13:56.686 real 0m2.836s 00:13:56.686 user 0m7.503s 00:13:56.686 sys 0m0.391s 00:13:56.686 13:33:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.686 13:33:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:56.686 ************************************ 00:13:56.686 END TEST bdev_bounds 00:13:56.686 ************************************ 00:13:56.686 13:33:48 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:56.686 13:33:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:56.686 13:33:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.686 13:33:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:56.686 ************************************ 00:13:56.686 START TEST bdev_nbd 00:13:56.686 ************************************ 00:13:56.686 13:33:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:56.944 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:56.944 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:13:56.944 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63040 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63040 /var/tmp/spdk-nbd.sock 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63040 ']' 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:56.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.945 13:33:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:56.945 [2024-11-20 13:33:48.822631] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
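The trace above shows nbd_function_test launching bdev_svc against the bdev JSON config on a dedicated RPC socket before any NBD device is attached. A minimal sketch of that setup flow, using only paths and RPCs visible in this log (the readiness poll via rpc_get_methods is an assumption; the harness uses its own waitforlisten helper, and process cleanup is omitted here):

#!/usr/bin/env bash
# Sketch: start the no-op bdev application and expose each bdev over NBD,
# mirroring the nbd_function_test flow traced above.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$SPDK/test/bdev/bdev.json" &

# Poll until the RPC socket answers (assumption: any cheap RPC such as
# rpc_get_methods is enough to prove the app is listening).
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

# Attach every bdev; with no explicit device argument nbd_start_disk picks
# the next free /dev/nbdX and prints it, as seen in the trace.
for b in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk "$b"
done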
00:13:56.945 [2024-11-20 13:33:48.822806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.204 [2024-11-20 13:33:49.001636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.204 [2024-11-20 13:33:49.130215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:58.141 13:33:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.141 1+0 records in 00:13:58.141 1+0 records out 00:13:58.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368834 s, 11.1 MB/s 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:58.141 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:58.142 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.707 1+0 records in 00:13:58.707 1+0 records out 00:13:58.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563125 s, 7.3 MB/s 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:58.707 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:13:58.966 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:58.966 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:58.966 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:58.966 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:58.966 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.967 1+0 records in 00:13:58.967 1+0 records out 00:13:58.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573742 s, 7.1 MB/s 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:58.967 13:33:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.225 1+0 records in 00:13:59.225 1+0 records out 00:13:59.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587788 s, 7.0 MB/s 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:59.225 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:59.484 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.742 1+0 records in 00:13:59.742 1+0 records out 00:13:59.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00436194 s, 939 kB/s 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:59.742 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
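Each nbd_start_disk call above is immediately followed by the waitfornbd helper, whose xtrace is interleaved through these lines: it polls /proc/partitions for the new device name, then forces a single 4 KiB O_DIRECT read to prove the kernel side is actually serving I/O. A condensed sketch of that helper (the retry bound and dd arguments mirror the trace; the scratch-file path is illustrative, and the real helper also retries the read step):

# Wait for /dev/$1 to appear and answer a direct read; give up after 20 tries.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    # A full 4096-byte O_DIRECT read proves the NBD connection is live;
    # a short read would mean the device is not ready yet.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [[ $size == 4096 ]]
}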
00:14:00.000 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:00.000 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:00.000 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:00.000 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:00.000 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:00.000 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.000 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.001 1+0 records in 00:14:00.001 1+0 records out 00:14:00.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625645 s, 6.5 MB/s 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:00.001 13:33:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.259 1+0 records in 00:14:00.259 1+0 records out 00:14:00.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650653 s, 6.3 MB/s 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:00.259 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:00.517 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:00.517 { 00:14:00.517 "nbd_device": "/dev/nbd0", 00:14:00.517 "bdev_name": "Nvme0n1" 00:14:00.517 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd1", 00:14:00.518 "bdev_name": "Nvme1n1p1" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd2", 00:14:00.518 "bdev_name": "Nvme1n1p2" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd3", 00:14:00.518 "bdev_name": "Nvme2n1" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd4", 00:14:00.518 "bdev_name": "Nvme2n2" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd5", 00:14:00.518 "bdev_name": "Nvme2n3" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd6", 00:14:00.518 "bdev_name": "Nvme3n1" 00:14:00.518 } 00:14:00.518 ]' 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd0", 00:14:00.518 "bdev_name": "Nvme0n1" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd1", 00:14:00.518 "bdev_name": "Nvme1n1p1" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd2", 00:14:00.518 "bdev_name": "Nvme1n1p2" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd3", 00:14:00.518 "bdev_name": "Nvme2n1" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd4", 00:14:00.518 "bdev_name": "Nvme2n2" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd5", 00:14:00.518 "bdev_name": "Nvme2n3" 00:14:00.518 }, 00:14:00.518 { 00:14:00.518 "nbd_device": "/dev/nbd6", 00:14:00.518 "bdev_name": "Nvme3n1" 00:14:00.518 } 00:14:00.518 ]' 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.518 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.085 13:33:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.344 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.602 13:33:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.860 13:33:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.425 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.682 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
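The JSON array printed above is the raw output of the nbd_get_disks RPC, which the nbd helpers reduce with jq to map kernel devices back to bdev names and to count attached devices. The same queries can be run by hand; the socket path, RPC name, and the '.[] | .nbd_device' filter are exactly those in this trace, while the combined interpolation filter is a convenience not used by the harness:

# Map every active NBD device to its backing bdev.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks |
    jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'

# Count attached devices the way nbd_get_count does: extract the device
# column and count lines matching /dev/nbd (0 once everything is stopped).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks |
    jq -r '.[] | .nbd_device' | grep -c /dev/nbd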
00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.940 13:33:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:03.198 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.456 13:33:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:03.456 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:14:03.714 /dev/nbd0 00:14:03.714 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.715 1+0 records in 00:14:03.715 1+0 records out 00:14:03.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614721 s, 6.7 MB/s 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:03.715 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:14:03.973 /dev/nbd1 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.973 13:33:55 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.973 1+0 records in 00:14:03.973 1+0 records out 00:14:03.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580908 s, 7.1 MB/s 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:03.973 13:33:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:14:04.539 /dev/nbd10 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.539 1+0 records in 00:14:04.539 1+0 records out 00:14:04.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679982 s, 6.0 MB/s 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:04.539 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:14:04.797 /dev/nbd11 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.797 1+0 records in 00:14:04.797 1+0 records out 00:14:04.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719221 s, 5.7 MB/s 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:04.797 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:14:05.055 /dev/nbd12 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
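The final phase, whose dd output appears a few lines below, is nbd_dd_data_verify: a 1 MiB random pattern (256 x 4 KiB blocks) is generated once, written through every NBD device with O_DIRECT, and then read back for comparison. A condensed sketch of that round trip follows; the write half mirrors the trace below, while reading back into a scratch file and checking with cmp is an assumption about the verify half, which this excerpt cuts off before showing:

# Write one random pattern through each device, then read it back and compare.
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
tmp=/tmp/nbdrandtest

dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbd_list[@]}"; do
    dd if="$nbd" of="$tmp.read" bs=4096 count=256 iflag=direct
    cmp "$tmp" "$tmp.read"    # any mismatch fails the verify pass
done
rm -f "$tmp" "$tmp.read"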
00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.055 1+0 records in 00:14:05.055 1+0 records out 00:14:05.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582005 s, 7.0 MB/s 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:05.055 13:33:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:14:05.313 /dev/nbd13 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.313 1+0 records in 00:14:05.313 1+0 records out 00:14:05.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550405 s, 7.4 MB/s 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:05.313 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:14:05.572 /dev/nbd14 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.572 1+0 records in 00:14:05.572 1+0 records out 00:14:05.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718291 s, 5.7 MB/s 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:05.572 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:05.573 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.573 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd0", 00:14:06.139 "bdev_name": "Nvme0n1" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd1", 00:14:06.139 "bdev_name": "Nvme1n1p1" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd10", 00:14:06.139 "bdev_name": "Nvme1n1p2" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd11", 00:14:06.139 "bdev_name": "Nvme2n1" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd12", 00:14:06.139 "bdev_name": "Nvme2n2" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd13", 00:14:06.139 "bdev_name": "Nvme2n3" 
00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd14", 00:14:06.139 "bdev_name": "Nvme3n1" 00:14:06.139 } 00:14:06.139 ]' 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd0", 00:14:06.139 "bdev_name": "Nvme0n1" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd1", 00:14:06.139 "bdev_name": "Nvme1n1p1" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd10", 00:14:06.139 "bdev_name": "Nvme1n1p2" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd11", 00:14:06.139 "bdev_name": "Nvme2n1" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd12", 00:14:06.139 "bdev_name": "Nvme2n2" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd13", 00:14:06.139 "bdev_name": "Nvme2n3" 00:14:06.139 }, 00:14:06.139 { 00:14:06.139 "nbd_device": "/dev/nbd14", 00:14:06.139 "bdev_name": "Nvme3n1" 00:14:06.139 } 00:14:06.139 ]' 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:06.139 /dev/nbd1 00:14:06.139 /dev/nbd10 00:14:06.139 /dev/nbd11 00:14:06.139 /dev/nbd12 00:14:06.139 /dev/nbd13 00:14:06.139 /dev/nbd14' 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:06.139 /dev/nbd1 00:14:06.139 /dev/nbd10 00:14:06.139 /dev/nbd11 00:14:06.139 /dev/nbd12 00:14:06.139 /dev/nbd13 00:14:06.139 /dev/nbd14' 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:14:06.139 13:33:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:14:06.139 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:14:06.139 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:14:06.139 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:14:06.139 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:06.139 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:06.139 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:06.139 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:06.140 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:06.140 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:06.140 256+0 records in 00:14:06.140 256+0 records out 00:14:06.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578327 s, 181 MB/s 00:14:06.140 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.140 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:06.140 256+0 records in 00:14:06.140 256+0 records out 00:14:06.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.133437 s, 7.9 MB/s 00:14:06.140 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.140 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:06.399 256+0 records in 00:14:06.399 256+0 records out 00:14:06.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14272 s, 7.3 MB/s 00:14:06.399 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.399 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:06.657 256+0 records in 00:14:06.657 256+0 records out 00:14:06.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14161 s, 7.4 MB/s 00:14:06.657 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.657 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:06.657 256+0 records in 00:14:06.658 256+0 records out 00:14:06.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136942 s, 7.7 MB/s 00:14:06.658 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.658 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:06.916 256+0 records in 00:14:06.916 256+0 records out 00:14:06.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147622 s, 7.1 MB/s 00:14:06.916 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.916 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:06.916 256+0 records in 00:14:06.916 256+0 records out 00:14:06.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14304 s, 7.3 MB/s 00:14:06.916 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.916 13:33:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:07.175 256+0 records in 00:14:07.175 256+0 records out 00:14:07.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150017 s, 7.0 MB/s 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.175 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.433 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:07.691 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.948 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:08.218 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:08.218 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:08.218 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:08.218 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.218 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.218 13:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:08.218 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.218 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.218 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.218 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:08.474 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:08.474 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.475 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.732 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:08.990 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:08.990 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:08.990 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:08.990 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.990 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.991 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:08.991 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.991 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.991 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.991 13:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.249 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:09.814 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:10.072 malloc_lvol_verify 00:14:10.072 13:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:10.329 a21d59a8-0b44-45a2-a753-62ac9744c1d3 00:14:10.329 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:10.587 9b786421-3471-4a86-b8aa-77fea846e843 00:14:10.587 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:10.845 /dev/nbd0 00:14:10.845 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:10.845 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:10.845 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:10.845 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:10.845 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:10.845 mke2fs 1.47.0 (5-Feb-2023) 00:14:10.845 Discarding device blocks: 0/4096 done 00:14:10.845 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:10.845 00:14:10.845 Allocating group tables: 0/1 done 00:14:10.845 Writing inode tables: 0/1 done 00:14:10.845 Creating journal (1024 blocks): done 00:14:11.102 Writing superblocks and filesystem accounting information: 0/1 done 00:14:11.102 00:14:11.102 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:11.102 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.102 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:11.102 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.102 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:11.102 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:11.102 13:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63040 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63040 ']' 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63040 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63040 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.360 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63040' 00:14:11.360 killing process with pid 63040 00:14:11.361 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63040 00:14:11.361 13:34:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63040 00:14:12.295 13:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:12.295 00:14:12.295 real 0m15.510s 00:14:12.295 user 0m22.754s 00:14:12.295 sys 0m4.703s 00:14:12.295 13:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.295 13:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:12.295 ************************************ 00:14:12.295 END TEST bdev_nbd 00:14:12.295 ************************************ 00:14:12.295 13:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:12.295 13:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:14:12.295 13:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:14:12.295 skipping fio tests on NVMe due to multi-ns failures. 00:14:12.295 13:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:14:12.295 13:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:12.295 13:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:12.295 13:34:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:12.295 13:34:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.295 13:34:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:12.295 ************************************ 00:14:12.295 START TEST bdev_verify 00:14:12.295 ************************************ 00:14:12.295 13:34:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:12.554 [2024-11-20 13:34:04.369939] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:14:12.554 [2024-11-20 13:34:04.370087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63493 ] 00:14:12.554 [2024-11-20 13:34:04.545702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:12.813 [2024-11-20 13:34:04.652469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.813 [2024-11-20 13:34:04.652480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.379 Running I/O for 5 seconds... 
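Note: the bdev_nbd pass that just finished verifies data through the kernel nbd nodes; stripped of the xtrace noise, the nbd_common.sh@76-85 flow traced above amounts to this bash sketch (device list, sizes, and paths are taken from the log itself):

    # Write 1 MiB of random data to every exported /dev/nbd* node, then read it back and compare.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 256 x 4 KiB = 1 MiB
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # write pass
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                             # verify pass; any mismatch fails the test
    done
    rm "$tmp_file"

Note: bdev_verify, started above, drives the bdevperf example app with a verify workload, in which written data is read back and compared before the job completes. The invocation from the trace, with its flags spelled out (flag meanings per standard bdevperf usage; the log itself does not expand them):

    # -q 128   : 128 outstanding I/Os per job
    # -o 4096  : 4 KiB I/O size
    # -w verify: write a pattern, read it back, compare
    # -t 5     : run time in seconds
    # -m 0x3   : reactor core mask (cores 0 and 1); -C and the trailing '' are passed as-is from the trace
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''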
00:14:15.700 20352.00 IOPS, 79.50 MiB/s [2024-11-20T13:34:08.674Z] 19872.00 IOPS, 77.62 MiB/s [2024-11-20T13:34:10.047Z] 19328.00 IOPS, 75.50 MiB/s [2024-11-20T13:34:10.613Z] 18880.00 IOPS, 73.75 MiB/s [2024-11-20T13:34:10.613Z] 18649.60 IOPS, 72.85 MiB/s
00:14:18.574 Latency(us)
00:14:18.574 [2024-11-20T13:34:10.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:18.574 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x0 length 0xbd0bd
00:14:18.574 Nvme0n1 : 5.07 1261.26 4.93 0.00 0.00 101003.75 20971.52 109147.23
00:14:18.574 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:14:18.574 Nvme0n1 : 5.06 1353.72 5.29 0.00 0.00 94151.28 9175.04 101044.60
00:14:18.574 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x0 length 0x4ff80
00:14:18.574 Nvme1n1p1 : 5.08 1260.79 4.92 0.00 0.00 100846.61 23473.80 103427.72
00:14:18.574 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x4ff80 length 0x4ff80
00:14:18.574 Nvme1n1p1 : 5.06 1353.21 5.29 0.00 0.00 94039.83 9294.20 98661.47
00:14:18.574 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x0 length 0x4ff7f
00:14:18.574 Nvme1n1p2 : 5.08 1260.22 4.92 0.00 0.00 100714.54 25141.99 101997.85
00:14:18.574 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:14:18.574 Nvme1n1p2 : 5.08 1361.96 5.32 0.00 0.00 93505.89 10902.81 94371.84
00:14:18.574 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x0 length 0x80000
00:14:18.574 Nvme2n1 : 5.09 1269.33 4.96 0.00 0.00 100036.44 7119.59 101044.60
00:14:18.574 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x80000 length 0x80000
00:14:18.574 Nvme2n1 : 5.08 1361.18 5.32 0.00 0.00 93391.02 12630.57 91988.71
00:14:18.574 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x0 length 0x80000
00:14:18.574 Nvme2n2 : 5.09 1268.94 4.96 0.00 0.00 99885.02 7566.43 101997.85
00:14:18.574 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x80000 length 0x80000
00:14:18.574 Nvme2n2 : 5.08 1360.41 5.31 0.00 0.00 93276.65 14298.76 97231.59
00:14:18.574 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x0 length 0x80000
00:14:18.574 Nvme2n3 : 5.10 1268.57 4.96 0.00 0.00 99736.44 7804.74 104857.60
00:14:18.574 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:18.574 Verification LBA range: start 0x80000 length 0x80000
00:14:18.574 Nvme2n3 : 5.08 1359.94 5.31 0.00 0.00 93145.31 14179.61 100567.97
00:14:18.575 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:18.575 Verification LBA range: start 0x0 length 0x20000
00:14:18.575 Nvme3n1 : 5.10 1268.20 4.95 0.00 0.00 99605.39 8102.63 110100.48
00:14:18.575 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:18.575 Verification LBA range: start 0x20000 length 0x20000
00:14:18.575 Nvme3n1 : 5.08 1359.51 5.31 0.00 0.00 93023.69 12511.42 101044.60
00:14:18.575 [2024-11-20T13:34:10.614Z] ===================================================================================================================
00:14:18.575 [2024-11-20T13:34:10.614Z] Total : 18367.24 71.75 0.00 0.00 96765.11 7119.59 110100.48
00:14:19.948
00:14:19.948 real 0m7.589s
00:14:19.948 user 0m14.046s
00:14:19.948 sys 0m0.247s
00:14:19.948 13:34:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:19.948 ************************************
00:14:19.948 13:34:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:14:19.948 END TEST bdev_verify
00:14:19.948 ************************************
00:14:19.948 13:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:19.948 13:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:14:19.948 13:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:19.948 13:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:19.948 ************************************
00:14:19.948 START TEST bdev_verify_big_io
00:14:19.948 ************************************
00:14:19.948 13:34:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:20.207 [2024-11-20 13:34:12.003200] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization...
00:14:20.207 [2024-11-20 13:34:12.003454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63591 ]
00:14:20.465 [2024-11-20 13:34:12.201671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:20.465 [2024-11-20 13:34:12.382070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:20.465 [2024-11-20 13:34:12.382077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:21.400 Running I/O for 5 seconds...
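Note: bdev_verify_big_io repeats the verify run with -o 65536, i.e. 64 KiB I/Os instead of 4 KiB, so the IOPS figures below are an order of magnitude lower while MiB/s stays in a comparable range. The Total row can be sanity-checked from its own columns:

    # throughput = IOPS x I/O size; matches the 100.27 MiB/s in the Total row below
    awk 'BEGIN { print 1604.28 * 65536 / 1048576 }'   # -> 100.268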
00:14:27.213 1270.00 IOPS, 79.38 MiB/s [2024-11-20T13:34:19.510Z] 2332.50 IOPS, 145.78 MiB/s [2024-11-20T13:34:19.510Z] 2803.00 IOPS, 175.19 MiB/s
00:14:27.471 Latency(us)
00:14:27.471 [2024-11-20T13:34:19.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:27.471 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x0 length 0xbd0b
00:14:27.471 Nvme0n1 : 5.81 110.22 6.89 0.00 0.00 1124097.68 21328.99 1159153.11
00:14:27.471 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0xbd0b length 0xbd0b
00:14:27.471 Nvme0n1 : 5.66 113.10 7.07 0.00 0.00 1083292.67 30146.56 1159153.11
00:14:27.471 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x0 length 0x4ff8
00:14:27.471 Nvme1n1p1 : 5.81 110.15 6.88 0.00 0.00 1095999.12 101521.22 991380.95
00:14:27.471 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x4ff8 length 0x4ff8
00:14:27.471 Nvme1n1p1 : 5.76 115.98 7.25 0.00 0.00 1031526.29 97708.22 991380.95
00:14:27.471 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x0 length 0x4ff7
00:14:27.471 Nvme1n1p2 : 5.82 110.03 6.88 0.00 0.00 1063261.28 135361.63 934185.89
00:14:27.471 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x4ff7 length 0x4ff7
00:14:27.471 Nvme1n1p2 : 5.82 121.00 7.56 0.00 0.00 972106.90 54811.93 880803.84
00:14:27.471 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x0 length 0x8000
00:14:27.471 Nvme2n1 : 5.92 113.04 7.07 0.00 0.00 1006435.97 94848.47 964689.92
00:14:27.471 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x8000 length 0x8000
00:14:27.471 Nvme2n1 : 5.82 120.94 7.56 0.00 0.00 942363.84 54811.93 915120.87
00:14:27.471 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x0 length 0x8000
00:14:27.471 Nvme2n2 : 5.98 117.80 7.36 0.00 0.00 944810.78 52667.11 1006632.96
00:14:27.471 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x8000 length 0x8000
00:14:27.471 Nvme2n2 : 5.96 123.92 7.74 0.00 0.00 887106.53 91988.71 945624.90
00:14:27.471 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x0 length 0x8000
00:14:27.471 Nvme2n3 : 6.02 122.01 7.63 0.00 0.00 887612.80 33602.09 1044763.00
00:14:27.471 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x8000 length 0x8000
00:14:27.471 Nvme2n3 : 5.99 133.03 8.31 0.00 0.00 811304.28 31933.91 991380.95
00:14:27.471 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x0 length 0x2000
00:14:27.471 Nvme3n1 : 6.05 88.42 5.53 0.00 0.00 1192135.03 10545.34 2364062.25
00:14:27.471 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:27.471 Verification LBA range: start 0x2000 length 0x2000
00:14:27.471 Nvme3n1 : 6.03 104.64 6.54 0.00 0.00 1007212.22 2338.44 2303054.20
00:14:27.471 [2024-11-20T13:34:19.510Z] ===================================================================================================================
00:14:27.471 [2024-11-20T13:34:19.510Z] Total : 1604.28 100.27 0.00 0.00 994928.10 2338.44 2364062.25
00:14:29.422
00:14:29.422 real 0m9.073s
00:14:29.422 user 0m16.851s
00:14:29.422 sys 0m0.312s
00:14:29.422 13:34:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:29.422 13:34:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:14:29.422 ************************************
00:14:29.422 END TEST bdev_verify_big_io
00:14:29.422 ************************************
00:14:29.422 13:34:21 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:29.422 13:34:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:29.422 13:34:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:29.422 13:34:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:29.422 ************************************
00:14:29.422 START TEST bdev_write_zeroes
00:14:29.422 ************************************
00:14:29.422 13:34:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:29.422 [2024-11-20 13:34:21.126498] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization...
00:14:29.422 [2024-11-20 13:34:21.126693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63706 ]
00:14:29.422 [2024-11-20 13:34:21.302956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:29.680 [2024-11-20 13:34:21.488888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:30.246 Running I/O for 1 seconds...
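Note: -w write_zeroes exercises the bdev write_zeroes path for one second (-t 1), zeroing LBA ranges rather than writing and verifying a data pattern. A hypothetical host-side spot check for a zeroed range, had the bdevs still been exported over nbd at this point in the run (device path illustrative only):

    cmp -n 1048576 /dev/zero /dev/nbd0 && echo "first 1 MiB reads back as zeroes"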
00:14:31.179 44352.00 IOPS, 173.25 MiB/s
00:14:31.179
00:14:31.179 Latency(us)
00:14:31.179 [2024-11-20T13:34:23.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:31.179 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:31.179 Nvme0n1 : 1.03 6337.87 24.76 0.00 0.00 20141.55 14894.55 43849.54
00:14:31.180 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:31.180 Nvme1n1p1 : 1.03 6329.21 24.72 0.00 0.00 20137.78 14656.23 44326.17
00:14:31.180 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:31.180 Nvme1n1p2 : 1.03 6320.60 24.69 0.00 0.00 20114.36 14596.65 44326.17
00:14:31.180 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:31.180 Nvme2n1 : 1.03 6312.76 24.66 0.00 0.00 20075.16 14834.97 42896.29
00:14:31.180 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:31.180 Nvme2n2 : 1.04 6305.01 24.63 0.00 0.00 20069.69 13941.29 43134.60
00:14:31.180 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:31.180 Nvme2n3 : 1.04 6297.19 24.60 0.00 0.00 20036.92 12332.68 43372.92
00:14:31.180 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:31.180 Nvme3n1 : 1.04 6289.49 24.57 0.00 0.00 20004.31 11081.54 43611.23
00:14:31.180 [2024-11-20T13:34:23.219Z] ===================================================================================================================
00:14:31.180 [2024-11-20T13:34:23.219Z] Total : 44192.13 172.63 0.00 0.00 20082.83 11081.54 44326.17
00:14:32.554
00:14:32.554 real 0m3.214s
00:14:32.554 user 0m2.854s
00:14:32.554 sys 0m0.232s
00:14:32.554 13:34:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:32.554 13:34:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:32.554 ************************************
00:14:32.554 END TEST bdev_write_zeroes
00:14:32.554 ************************************
00:14:32.554 13:34:24 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:32.554 13:34:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:32.554 13:34:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:32.554 13:34:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:32.554 ************************************
00:14:32.554 START TEST bdev_json_nonenclosed
00:14:32.554 ************************************
00:14:32.554 13:34:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:32.554 [2024-11-20 13:34:24.397522] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization...
00:14:32.554 [2024-11-20 13:34:24.397675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63765 ] 00:14:32.554 [2024-11-20 13:34:24.570978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.813 [2024-11-20 13:34:24.742625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.813 [2024-11-20 13:34:24.742736] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:32.813 [2024-11-20 13:34:24.742763] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:32.813 [2024-11-20 13:34:24.742777] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.071 00:14:33.071 real 0m0.723s 00:14:33.071 user 0m0.491s 00:14:33.072 sys 0m0.125s 00:14:33.072 13:34:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.072 13:34:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 ************************************ 00:14:33.072 END TEST bdev_json_nonenclosed 00:14:33.072 ************************************ 00:14:33.072 13:34:25 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:33.072 13:34:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:33.072 13:34:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.072 13:34:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 ************************************ 00:14:33.072 START TEST bdev_json_nonarray 00:14:33.072 ************************************ 00:14:33.072 13:34:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:33.330 [2024-11-20 13:34:25.151839] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:14:33.330 [2024-11-20 13:34:25.152065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63790 ] 00:14:33.330 [2024-11-20 13:34:25.324560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.589 [2024-11-20 13:34:25.427805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.589 [2024-11-20 13:34:25.427940] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
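Note: nonenclosed.json and nonarray.json are deliberately malformed configs; these tests pass when bdevperf rejects them cleanly, as in the json_config errors above and the rpc/app-stop messages that follow. For contrast, a minimal well-formed --json config is a single object whose "subsystems" key is an array (sketch; subsystem contents elided):

    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }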
00:14:33.589 [2024-11-20 13:34:25.427972] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:33.589 [2024-11-20 13:34:25.427996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.848 00:14:33.849 real 0m0.649s 00:14:33.849 user 0m0.415s 00:14:33.849 sys 0m0.127s 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:33.849 ************************************ 00:14:33.849 END TEST bdev_json_nonarray 00:14:33.849 ************************************ 00:14:33.849 13:34:25 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:14:33.849 13:34:25 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:14:33.849 13:34:25 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:14:33.849 13:34:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:33.849 13:34:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.849 13:34:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:33.849 ************************************ 00:14:33.849 START TEST bdev_gpt_uuid 00:14:33.849 ************************************ 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63821 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63821 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63821 ']' 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.849 13:34:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:33.849 [2024-11-20 13:34:25.865215] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:14:33.849 [2024-11-20 13:34:25.865371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63821 ] 00:14:34.108 [2024-11-20 13:34:26.045215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.367 [2024-11-20 13:34:26.149801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.934 13:34:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.934 13:34:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:14:34.934 13:34:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:34.934 13:34:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.934 13:34:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:35.502 Some configs were skipped because the RPC state that can call them passed over. 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:14:35.502 { 00:14:35.502 "name": "Nvme1n1p1", 00:14:35.502 "aliases": [ 00:14:35.502 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:14:35.502 ], 00:14:35.502 "product_name": "GPT Disk", 00:14:35.502 "block_size": 4096, 00:14:35.502 "num_blocks": 655104, 00:14:35.502 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:35.502 "assigned_rate_limits": { 00:14:35.502 "rw_ios_per_sec": 0, 00:14:35.502 "rw_mbytes_per_sec": 0, 00:14:35.502 "r_mbytes_per_sec": 0, 00:14:35.502 "w_mbytes_per_sec": 0 00:14:35.502 }, 00:14:35.502 "claimed": false, 00:14:35.502 "zoned": false, 00:14:35.502 "supported_io_types": { 00:14:35.502 "read": true, 00:14:35.502 "write": true, 00:14:35.502 "unmap": true, 00:14:35.502 "flush": true, 00:14:35.502 "reset": true, 00:14:35.502 "nvme_admin": false, 00:14:35.502 "nvme_io": false, 00:14:35.502 "nvme_io_md": false, 00:14:35.502 "write_zeroes": true, 00:14:35.502 "zcopy": false, 00:14:35.502 "get_zone_info": false, 00:14:35.502 "zone_management": false, 00:14:35.502 "zone_append": false, 00:14:35.502 "compare": true, 00:14:35.502 "compare_and_write": false, 00:14:35.502 "abort": true, 00:14:35.502 "seek_hole": false, 00:14:35.502 "seek_data": false, 00:14:35.502 "copy": true, 00:14:35.502 "nvme_iov_md": false 00:14:35.502 }, 00:14:35.502 "driver_specific": { 
00:14:35.502 "gpt": { 00:14:35.502 "base_bdev": "Nvme1n1", 00:14:35.502 "offset_blocks": 256, 00:14:35.502 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:14:35.502 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:35.502 "partition_name": "SPDK_TEST_first" 00:14:35.502 } 00:14:35.502 } 00:14:35.502 } 00:14:35.502 ]' 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:14:35.502 { 00:14:35.502 "name": "Nvme1n1p2", 00:14:35.502 "aliases": [ 00:14:35.502 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:14:35.502 ], 00:14:35.502 "product_name": "GPT Disk", 00:14:35.502 "block_size": 4096, 00:14:35.502 "num_blocks": 655103, 00:14:35.502 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:35.502 "assigned_rate_limits": { 00:14:35.502 "rw_ios_per_sec": 0, 00:14:35.502 "rw_mbytes_per_sec": 0, 00:14:35.502 "r_mbytes_per_sec": 0, 00:14:35.502 "w_mbytes_per_sec": 0 00:14:35.502 }, 00:14:35.502 "claimed": false, 00:14:35.502 "zoned": false, 00:14:35.502 "supported_io_types": { 00:14:35.502 "read": true, 00:14:35.502 "write": true, 00:14:35.502 "unmap": true, 00:14:35.502 "flush": true, 00:14:35.502 "reset": true, 00:14:35.502 "nvme_admin": false, 00:14:35.502 "nvme_io": false, 00:14:35.502 "nvme_io_md": false, 00:14:35.502 "write_zeroes": true, 00:14:35.502 "zcopy": false, 00:14:35.502 "get_zone_info": false, 00:14:35.502 "zone_management": false, 00:14:35.502 "zone_append": false, 00:14:35.502 "compare": true, 00:14:35.502 "compare_and_write": false, 00:14:35.502 "abort": true, 00:14:35.502 "seek_hole": false, 00:14:35.502 "seek_data": false, 00:14:35.502 "copy": true, 00:14:35.502 "nvme_iov_md": false 00:14:35.502 }, 00:14:35.502 "driver_specific": { 00:14:35.502 "gpt": { 00:14:35.502 "base_bdev": "Nvme1n1", 00:14:35.502 "offset_blocks": 655360, 00:14:35.502 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:14:35.502 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:35.502 "partition_name": "SPDK_TEST_second" 00:14:35.502 } 00:14:35.502 } 00:14:35.502 } 00:14:35.502 ]' 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:14:35.502 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63821 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63821 ']' 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63821 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63821 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.761 killing process with pid 63821 00:14:35.761 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.762 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63821' 00:14:35.762 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63821 00:14:35.762 13:34:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63821 00:14:38.288 00:14:38.288 real 0m4.088s 00:14:38.288 user 0m4.502s 00:14:38.288 sys 0m0.483s 00:14:38.288 13:34:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.288 13:34:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:38.288 ************************************ 00:14:38.288 END TEST bdev_gpt_uuid 00:14:38.288 ************************************ 00:14:38.288 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:14:38.288 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:38.288 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:14:38.288 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:38.288 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:38.288 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:14:38.288 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:14:38.289 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:14:38.289 13:34:29 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:38.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:38.289 Waiting for block devices as requested 00:14:38.546 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:38.546 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:14:38.546 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:38.546 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.812 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:43.812 13:34:35 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:14:43.812 13:34:35 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:14:44.070 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:44.070 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:44.070 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:44.070 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:44.070 13:34:35 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:14:44.070 00:14:44.070 real 1m5.272s 00:14:44.070 user 1m25.505s 00:14:44.070 sys 0m9.684s 00:14:44.070 13:34:35 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.070 13:34:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:44.070 ************************************ 00:14:44.070 END TEST blockdev_nvme_gpt 00:14:44.070 ************************************ 00:14:44.070 13:34:35 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:44.070 13:34:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.070 13:34:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.070 13:34:35 -- common/autotest_common.sh@10 -- # set +x 00:14:44.070 ************************************ 00:14:44.070 START TEST nvme 00:14:44.070 ************************************ 00:14:44.070 13:34:35 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:44.070 * Looking for test storage... 00:14:44.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:44.070 13:34:36 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:44.070 13:34:36 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:44.070 13:34:36 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:44.328 13:34:36 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.328 13:34:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.328 13:34:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.328 13:34:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.328 13:34:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.328 13:34:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.328 13:34:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.328 13:34:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.328 13:34:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:14:44.328 13:34:36 nvme -- scripts/common.sh@345 -- # : 1 00:14:44.328 13:34:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.328 13:34:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
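
Note: the wipefs output above decodes neatly: 45 46 49 20 50 41 52 54 is ASCII for "EFI PART", the GPT header signature, erased from the primary header at offset 0x1000 (LBA 1 of this 4096-byte-block namespace) and from the backup header near the end of the device, while the two bytes 55 aa at offset 0x1fe are the protective-MBR boot signature. A one-liner confirms the signature decoding:

    # The eight erased GPT bytes spell out the header signature:
    printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'   # -> EFI PART
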
ver1_l : ver2_l) )) 00:14:44.328 13:34:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:14:44.328 13:34:36 nvme -- scripts/common.sh@353 -- # local d=1 00:14:44.328 13:34:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.328 13:34:36 nvme -- scripts/common.sh@355 -- # echo 1 00:14:44.328 13:34:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.328 13:34:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@353 -- # local d=2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.328 13:34:36 nvme -- scripts/common.sh@355 -- # echo 2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.328 13:34:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.328 13:34:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.328 13:34:36 nvme -- scripts/common.sh@368 -- # return 0 00:14:44.328 13:34:36 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.328 13:34:36 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.328 --rc genhtml_branch_coverage=1 00:14:44.328 --rc genhtml_function_coverage=1 00:14:44.328 --rc genhtml_legend=1 00:14:44.328 --rc geninfo_all_blocks=1 00:14:44.328 --rc geninfo_unexecuted_blocks=1 00:14:44.328 00:14:44.328 ' 00:14:44.328 13:34:36 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.328 --rc genhtml_branch_coverage=1 00:14:44.328 --rc genhtml_function_coverage=1 00:14:44.328 --rc genhtml_legend=1 00:14:44.328 --rc geninfo_all_blocks=1 00:14:44.328 --rc geninfo_unexecuted_blocks=1 00:14:44.328 00:14:44.328 ' 00:14:44.328 13:34:36 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.328 --rc genhtml_branch_coverage=1 00:14:44.328 --rc genhtml_function_coverage=1 00:14:44.328 --rc genhtml_legend=1 00:14:44.328 --rc geninfo_all_blocks=1 00:14:44.328 --rc geninfo_unexecuted_blocks=1 00:14:44.328 00:14:44.328 ' 00:14:44.328 13:34:36 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:44.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.328 --rc genhtml_branch_coverage=1 00:14:44.328 --rc genhtml_function_coverage=1 00:14:44.328 --rc genhtml_legend=1 00:14:44.328 --rc geninfo_all_blocks=1 00:14:44.328 --rc geninfo_unexecuted_blocks=1 00:14:44.328 00:14:44.328 ' 00:14:44.328 13:34:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:44.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:45.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:45.413 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:45.413 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:45.413 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:45.413 13:34:37 nvme -- nvme/nvme.sh@79 -- # uname 00:14:45.413 13:34:37 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:14:45.413 13:34:37 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:14:45.413 13:34:37 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:14:45.413 13:34:37 nvme -- 
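
Note: the cmp_versions trace above shows how scripts/common.sh concludes that lcov 1.15 sorts before 2: each version string is split into components on '.', '-' and ':' via IFS, and the components are compared numerically left to right, with missing components treated as 0. A standalone sketch of that algorithm (not the exact common.sh source):

    # Succeed when version $1 sorts strictly before version $2.
    ver_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
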
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1075 -- # stubpid=64468 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:14:45.413 Waiting for stub to ready for secondary processes... 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64468 ]] 00:14:45.413 13:34:37 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:14:45.413 [2024-11-20 13:34:37.404333] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:14:45.413 [2024-11-20 13:34:37.404507] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:14:46.349 [2024-11-20 13:34:38.194240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:46.349 [2024-11-20 13:34:38.297636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.349 [2024-11-20 13:34:38.297711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.349 [2024-11-20 13:34:38.297713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.349 [2024-11-20 13:34:38.316180] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:14:46.349 [2024-11-20 13:34:38.316247] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:46.349 [2024-11-20 13:34:38.325716] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:14:46.349 [2024-11-20 13:34:38.325916] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:14:46.349 [2024-11-20 13:34:38.328614] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:46.349 [2024-11-20 13:34:38.328976] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:14:46.349 [2024-11-20 13:34:38.329130] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:14:46.349 [2024-11-20 13:34:38.331772] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:46.349 [2024-11-20 13:34:38.332080] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:14:46.349 [2024-11-20 13:34:38.332228] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:14:46.349 [2024-11-20 13:34:38.335047] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:46.349 [2024-11-20 13:34:38.335341] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:14:46.349 [2024-11-20 13:34:38.335497] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:14:46.349 [2024-11-20 13:34:38.335654] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:14:46.349 [2024-11-20 13:34:38.335770] nvme_cuse.c: 
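
Note: the start_stub sequence traced above follows a simple pattern: launch the stub app (4 GiB of hugepage memory, core mask 0xE), remember its pid, then poll once per second for the /var/run/spdk_stub0 sentinel while confirming via /proc that the stub is still alive. A reduced sketch of that loop, with the pid and paths taken from the trace ($rootdir standing in for the repo checkout):

    # Launch the stub and block until it signals readiness or dies.
    "$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
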
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:14:46.349 13:34:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:46.349 done. 00:14:46.349 13:34:38 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:14:46.349 13:34:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:46.349 13:34:38 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:14:46.349 13:34:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.349 13:34:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.349 ************************************ 00:14:46.349 START TEST nvme_reset 00:14:46.349 ************************************ 00:14:46.349 13:34:38 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:46.917 Initializing NVMe Controllers 00:14:46.917 Skipping QEMU NVMe SSD at 0000:00:10.0 00:14:46.917 Skipping QEMU NVMe SSD at 0000:00:11.0 00:14:46.917 Skipping QEMU NVMe SSD at 0000:00:13.0 00:14:46.917 Skipping QEMU NVMe SSD at 0000:00:12.0 00:14:46.917 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:14:46.917 00:14:46.917 real 0m0.379s 00:14:46.917 user 0m0.132s 00:14:46.917 sys 0m0.202s 00:14:46.917 13:34:38 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.917 13:34:38 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:14:46.917 ************************************ 00:14:46.917 END TEST nvme_reset 00:14:46.917 ************************************ 00:14:46.917 13:34:38 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:14:46.917 13:34:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:46.917 13:34:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.917 13:34:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.917 ************************************ 00:14:46.917 START TEST nvme_identify 00:14:46.917 ************************************ 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:14:46.917 13:34:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:14:46.917 13:34:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:14:46.917 13:34:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:14:46.917 13:34:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:46.917 13:34:38 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:46.917 13:34:38 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:14:47.179 [2024-11-20 
13:34:39.207277] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64485 terminated unexpected 00:14:47.179 ===================================================== 00:14:47.179 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:47.179 ===================================================== 00:14:47.179 Controller Capabilities/Features 00:14:47.179 ================================ 00:14:47.179 Vendor ID: 1b36 00:14:47.179 Subsystem Vendor ID: 1af4 00:14:47.179 Serial Number: 12340 00:14:47.179 Model Number: QEMU NVMe Ctrl 00:14:47.179 Firmware Version: 8.0.0 00:14:47.179 Recommended Arb Burst: 6 00:14:47.179 IEEE OUI Identifier: 00 54 52 00:14:47.179 Multi-path I/O 00:14:47.179 May have multiple subsystem ports: No 00:14:47.179 May have multiple controllers: No 00:14:47.179 Associated with SR-IOV VF: No 00:14:47.179 Max Data Transfer Size: 524288 00:14:47.179 Max Number of Namespaces: 256 00:14:47.179 Max Number of I/O Queues: 64 00:14:47.179 NVMe Specification Version (VS): 1.4 00:14:47.179 NVMe Specification Version (Identify): 1.4 00:14:47.179 Maximum Queue Entries: 2048 00:14:47.179 Contiguous Queues Required: Yes 00:14:47.179 Arbitration Mechanisms Supported 00:14:47.179 Weighted Round Robin: Not Supported 00:14:47.179 Vendor Specific: Not Supported 00:14:47.179 Reset Timeout: 7500 ms 00:14:47.179 Doorbell Stride: 4 bytes 00:14:47.179 NVM Subsystem Reset: Not Supported 00:14:47.179 Command Sets Supported 00:14:47.179 NVM Command Set: Supported 00:14:47.179 Boot Partition: Not Supported 00:14:47.179 Memory Page Size Minimum: 4096 bytes 00:14:47.179 Memory Page Size Maximum: 65536 bytes 00:14:47.179 Persistent Memory Region: Not Supported 00:14:47.179 Optional Asynchronous Events Supported 00:14:47.179 Namespace Attribute Notices: Supported 00:14:47.179 Firmware Activation Notices: Not Supported 00:14:47.179 ANA Change Notices: Not Supported 00:14:47.179 PLE Aggregate Log Change Notices: Not Supported 00:14:47.179 LBA Status Info Alert Notices: Not Supported 00:14:47.179 EGE Aggregate Log Change Notices: Not Supported 00:14:47.179 Normal NVM Subsystem Shutdown event: Not Supported 00:14:47.179 Zone Descriptor Change Notices: Not Supported 00:14:47.179 Discovery Log Change Notices: Not Supported 00:14:47.179 Controller Attributes 00:14:47.179 128-bit Host Identifier: Not Supported 00:14:47.179 Non-Operational Permissive Mode: Not Supported 00:14:47.179 NVM Sets: Not Supported 00:14:47.179 Read Recovery Levels: Not Supported 00:14:47.179 Endurance Groups: Not Supported 00:14:47.179 Predictable Latency Mode: Not Supported 00:14:47.179 Traffic Based Keep ALive: Not Supported 00:14:47.179 Namespace Granularity: Not Supported 00:14:47.179 SQ Associations: Not Supported 00:14:47.179 UUID List: Not Supported 00:14:47.179 Multi-Domain Subsystem: Not Supported 00:14:47.179 Fixed Capacity Management: Not Supported 00:14:47.179 Variable Capacity Management: Not Supported 00:14:47.179 Delete Endurance Group: Not Supported 00:14:47.179 Delete NVM Set: Not Supported 00:14:47.179 Extended LBA Formats Supported: Supported 00:14:47.179 Flexible Data Placement Supported: Not Supported 00:14:47.179 00:14:47.179 Controller Memory Buffer Support 00:14:47.179 ================================ 00:14:47.179 Supported: No 00:14:47.179 00:14:47.179 Persistent Memory Region Support 00:14:47.179 ================================ 00:14:47.179 Supported: No 00:14:47.179 00:14:47.179 Admin Command Set Attributes 00:14:47.179 ============================ 00:14:47.179 Security Send/Receive: 
Not Supported 00:14:47.179 Format NVM: Supported 00:14:47.179 Firmware Activate/Download: Not Supported 00:14:47.179 Namespace Management: Supported 00:14:47.179 Device Self-Test: Not Supported 00:14:47.179 Directives: Supported 00:14:47.179 NVMe-MI: Not Supported 00:14:47.179 Virtualization Management: Not Supported 00:14:47.179 Doorbell Buffer Config: Supported 00:14:47.179 Get LBA Status Capability: Not Supported 00:14:47.179 Command & Feature Lockdown Capability: Not Supported 00:14:47.179 Abort Command Limit: 4 00:14:47.179 Async Event Request Limit: 4 00:14:47.179 Number of Firmware Slots: N/A 00:14:47.179 Firmware Slot 1 Read-Only: N/A 00:14:47.179 Firmware Activation Without Reset: N/A 00:14:47.179 Multiple Update Detection Support: N/A 00:14:47.179 Firmware Update Granularity: No Information Provided 00:14:47.179 Per-Namespace SMART Log: Yes 00:14:47.179 Asymmetric Namespace Access Log Page: Not Supported 00:14:47.179 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:47.179 Command Effects Log Page: Supported 00:14:47.179 Get Log Page Extended Data: Supported 00:14:47.179 Telemetry Log Pages: Not Supported 00:14:47.179 Persistent Event Log Pages: Not Supported 00:14:47.179 Supported Log Pages Log Page: May Support 00:14:47.179 Commands Supported & Effects Log Page: Not Supported 00:14:47.179 Feature Identifiers & Effects Log Page:May Support 00:14:47.179 NVMe-MI Commands & Effects Log Page: May Support 00:14:47.179 Data Area 4 for Telemetry Log: Not Supported 00:14:47.179 Error Log Page Entries Supported: 1 00:14:47.179 Keep Alive: Not Supported 00:14:47.179 00:14:47.179 NVM Command Set Attributes 00:14:47.179 ========================== 00:14:47.179 Submission Queue Entry Size 00:14:47.179 Max: 64 00:14:47.179 Min: 64 00:14:47.179 Completion Queue Entry Size 00:14:47.179 Max: 16 00:14:47.179 Min: 16 00:14:47.179 Number of Namespaces: 256 00:14:47.179 Compare Command: Supported 00:14:47.179 Write Uncorrectable Command: Not Supported 00:14:47.179 Dataset Management Command: Supported 00:14:47.179 Write Zeroes Command: Supported 00:14:47.179 Set Features Save Field: Supported 00:14:47.179 Reservations: Not Supported 00:14:47.179 Timestamp: Supported 00:14:47.179 Copy: Supported 00:14:47.179 Volatile Write Cache: Present 00:14:47.179 Atomic Write Unit (Normal): 1 00:14:47.179 Atomic Write Unit (PFail): 1 00:14:47.179 Atomic Compare & Write Unit: 1 00:14:47.179 Fused Compare & Write: Not Supported 00:14:47.179 Scatter-Gather List 00:14:47.179 SGL Command Set: Supported 00:14:47.179 SGL Keyed: Not Supported 00:14:47.179 SGL Bit Bucket Descriptor: Not Supported 00:14:47.179 SGL Metadata Pointer: Not Supported 00:14:47.179 Oversized SGL: Not Supported 00:14:47.179 SGL Metadata Address: Not Supported 00:14:47.179 SGL Offset: Not Supported 00:14:47.179 Transport SGL Data Block: Not Supported 00:14:47.179 Replay Protected Memory Block: Not Supported 00:14:47.179 00:14:47.179 Firmware Slot Information 00:14:47.179 ========================= 00:14:47.179 Active slot: 1 00:14:47.179 Slot 1 Firmware Revision: 1.0 00:14:47.179 00:14:47.179 00:14:47.179 Commands Supported and Effects 00:14:47.179 ============================== 00:14:47.179 Admin Commands 00:14:47.179 -------------- 00:14:47.179 Delete I/O Submission Queue (00h): Supported 00:14:47.179 Create I/O Submission Queue (01h): Supported 00:14:47.179 Get Log Page (02h): Supported 00:14:47.179 Delete I/O Completion Queue (04h): Supported 00:14:47.179 Create I/O Completion Queue (05h): Supported 00:14:47.179 Identify (06h): Supported 
00:14:47.179 Abort (08h): Supported 00:14:47.179 Set Features (09h): Supported 00:14:47.179 Get Features (0Ah): Supported 00:14:47.179 Asynchronous Event Request (0Ch): Supported 00:14:47.179 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:47.179 Directive Send (19h): Supported 00:14:47.179 Directive Receive (1Ah): Supported 00:14:47.179 Virtualization Management (1Ch): Supported 00:14:47.179 Doorbell Buffer Config (7Ch): Supported 00:14:47.179 Format NVM (80h): Supported LBA-Change 00:14:47.179 I/O Commands 00:14:47.179 ------------ 00:14:47.179 Flush (00h): Supported LBA-Change 00:14:47.179 Write (01h): Supported LBA-Change 00:14:47.179 Read (02h): Supported 00:14:47.179 Compare (05h): Supported 00:14:47.179 Write Zeroes (08h): Supported LBA-Change 00:14:47.179 Dataset Management (09h): Supported LBA-Change 00:14:47.179 Unknown (0Ch): Supported 00:14:47.179 Unknown (12h): Supported 00:14:47.179 Copy (19h): Supported LBA-Change 00:14:47.179 Unknown (1Dh): Supported LBA-Change 00:14:47.179 00:14:47.179 Error Log 00:14:47.179 ========= 00:14:47.179 00:14:47.180 Arbitration 00:14:47.180 =========== 00:14:47.180 Arbitration Burst: no limit 00:14:47.180 00:14:47.180 Power Management 00:14:47.180 ================ 00:14:47.180 Number of Power States: 1 00:14:47.180 Current Power State: Power State #0 00:14:47.180 Power State #0: 00:14:47.180 Max Power: 25.00 W 00:14:47.180 Non-Operational State: Operational 00:14:47.180 Entry Latency: 16 microseconds 00:14:47.180 Exit Latency: 4 microseconds 00:14:47.180 Relative Read Throughput: 0 00:14:47.180 Relative Read Latency: 0 00:14:47.180 Relative Write Throughput: 0 00:14:47.180 Relative Write Latency: 0 00:14:47.180 [2024-11-20 13:34:39.208829] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64485 terminated unexpected 00:14:47.180 Idle Power: Not Reported 00:14:47.180 Active Power: Not Reported 00:14:47.180 Non-Operational Permissive Mode: Not Supported 00:14:47.180 00:14:47.180 Health Information 00:14:47.180 ================== 00:14:47.180 Critical Warnings: 00:14:47.180 Available Spare Space: OK 00:14:47.180 Temperature: OK 00:14:47.180 Device Reliability: OK 00:14:47.180 Read Only: No 00:14:47.180 Volatile Memory Backup: OK 00:14:47.180 Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.180 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:47.180 Available Spare: 0% 00:14:47.180 Available Spare Threshold: 0% 00:14:47.180 Life Percentage Used: 0% 00:14:47.180 Data Units Read: 649 00:14:47.180 Data Units Written: 577 00:14:47.180 Host Read Commands: 32150 00:14:47.180 Host Write Commands: 31936 00:14:47.180 Controller Busy Time: 0 minutes 00:14:47.180 Power Cycles: 0 00:14:47.180 Power On Hours: 0 hours 00:14:47.180 Unsafe Shutdowns: 0 00:14:47.180 Unrecoverable Media Errors: 0 00:14:47.180 Lifetime Error Log Entries: 0 00:14:47.180 Warning Temperature Time: 0 minutes 00:14:47.180 Critical Temperature Time: 0 minutes 00:14:47.180 00:14:47.180 Number of Queues 00:14:47.180 ================ 00:14:47.180 Number of I/O Submission Queues: 64 00:14:47.180 Number of I/O Completion Queues: 64 00:14:47.180 00:14:47.180 ZNS Specific Controller Data 00:14:47.180 ============================ 00:14:47.180 Zone Append Size Limit: 0 00:14:47.180 00:14:47.180 00:14:47.180 Active Namespaces 00:14:47.180 ================= 00:14:47.180 Namespace ID:1 00:14:47.180 Error Recovery Timeout: Unlimited 00:14:47.180 Command Set Identifier: NVM (00h) 00:14:47.180 Deallocate: Supported
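
Note: a small detail in the health output above: NVMe exposes the composite temperature in Kelvin, and the Celsius value in parentheses is simply that reading minus 273:

    # 323 K -> 50 C, matching "Current Temperature: 323 Kelvin (50 Celsius)".
    kelvin=323; echo "$(( kelvin - 273 )) C"
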
Deallocated/Unwritten Error: Supported 00:14:47.180 Deallocated Read Value: All 0x00 00:14:47.180 Deallocate in Write Zeroes: Not Supported 00:14:47.180 Deallocated Guard Field: 0xFFFF 00:14:47.180 Flush: Supported 00:14:47.180 Reservation: Not Supported 00:14:47.180 Metadata Transferred as: Separate Metadata Buffer 00:14:47.180 Namespace Sharing Capabilities: Private 00:14:47.180 Size (in LBAs): 1548666 (5GiB) 00:14:47.180 Capacity (in LBAs): 1548666 (5GiB) 00:14:47.180 Utilization (in LBAs): 1548666 (5GiB) 00:14:47.180 Thin Provisioning: Not Supported 00:14:47.180 Per-NS Atomic Units: No 00:14:47.180 Maximum Single Source Range Length: 128 00:14:47.180 Maximum Copy Length: 128 00:14:47.180 Maximum Source Range Count: 128 00:14:47.180 NGUID/EUI64 Never Reused: No 00:14:47.180 Namespace Write Protected: No 00:14:47.180 Number of LBA Formats: 8 00:14:47.180 Current LBA Format: LBA Format #07 00:14:47.180 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.180 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:47.180 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:47.180 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:47.180 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:47.180 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:47.180 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:47.180 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:47.180 00:14:47.180 NVM Specific Namespace Data 00:14:47.180 =========================== 00:14:47.180 Logical Block Storage Tag Mask: 0 00:14:47.180 Protection Information Capabilities: 00:14:47.180 16b Guard Protection Information Storage Tag Support: No 00:14:47.180 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:47.180 Storage Tag Check Read Support: No 00:14:47.180 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.180 ===================================================== 00:14:47.180 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:47.180 ===================================================== 00:14:47.180 Controller Capabilities/Features 00:14:47.180 ================================ 00:14:47.180 Vendor ID: 1b36 00:14:47.180 Subsystem Vendor ID: 1af4 00:14:47.180 Serial Number: 12341 00:14:47.180 Model Number: QEMU NVMe Ctrl 00:14:47.180 Firmware Version: 8.0.0 00:14:47.180 Recommended Arb Burst: 6 00:14:47.180 IEEE OUI Identifier: 00 54 52 00:14:47.180 Multi-path I/O 00:14:47.180 May have multiple subsystem ports: No 00:14:47.180 May have multiple controllers: No 00:14:47.180 Associated with SR-IOV VF: No 00:14:47.180 Max Data Transfer Size: 524288 00:14:47.180 Max Number of Namespaces: 256 00:14:47.180 Max Number of I/O Queues: 64 00:14:47.180 NVMe Specification Version (VS): 1.4 00:14:47.180 NVMe 
Specification Version (Identify): 1.4 00:14:47.180 Maximum Queue Entries: 2048 00:14:47.180 Contiguous Queues Required: Yes 00:14:47.180 Arbitration Mechanisms Supported 00:14:47.180 Weighted Round Robin: Not Supported 00:14:47.180 Vendor Specific: Not Supported 00:14:47.180 Reset Timeout: 7500 ms 00:14:47.180 Doorbell Stride: 4 bytes 00:14:47.180 NVM Subsystem Reset: Not Supported 00:14:47.180 Command Sets Supported 00:14:47.180 NVM Command Set: Supported 00:14:47.180 Boot Partition: Not Supported 00:14:47.180 Memory Page Size Minimum: 4096 bytes 00:14:47.180 Memory Page Size Maximum: 65536 bytes 00:14:47.180 Persistent Memory Region: Not Supported 00:14:47.180 Optional Asynchronous Events Supported 00:14:47.180 Namespace Attribute Notices: Supported 00:14:47.180 Firmware Activation Notices: Not Supported 00:14:47.180 ANA Change Notices: Not Supported 00:14:47.180 PLE Aggregate Log Change Notices: Not Supported 00:14:47.180 LBA Status Info Alert Notices: Not Supported 00:14:47.180 EGE Aggregate Log Change Notices: Not Supported 00:14:47.180 Normal NVM Subsystem Shutdown event: Not Supported 00:14:47.180 Zone Descriptor Change Notices: Not Supported 00:14:47.180 Discovery Log Change Notices: Not Supported 00:14:47.180 Controller Attributes 00:14:47.180 128-bit Host Identifier: Not Supported 00:14:47.180 Non-Operational Permissive Mode: Not Supported 00:14:47.180 NVM Sets: Not Supported 00:14:47.180 Read Recovery Levels: Not Supported 00:14:47.180 Endurance Groups: Not Supported 00:14:47.180 Predictable Latency Mode: Not Supported 00:14:47.180 Traffic Based Keep ALive: Not Supported 00:14:47.180 Namespace Granularity: Not Supported 00:14:47.180 SQ Associations: Not Supported 00:14:47.180 UUID List: Not Supported 00:14:47.180 Multi-Domain Subsystem: Not Supported 00:14:47.180 Fixed Capacity Management: Not Supported 00:14:47.180 Variable Capacity Management: Not Supported 00:14:47.180 Delete Endurance Group: Not Supported 00:14:47.180 Delete NVM Set: Not Supported 00:14:47.180 Extended LBA Formats Supported: Supported 00:14:47.180 Flexible Data Placement Supported: Not Supported 00:14:47.180 00:14:47.180 Controller Memory Buffer Support 00:14:47.180 ================================ 00:14:47.180 Supported: No 00:14:47.180 00:14:47.180 Persistent Memory Region Support 00:14:47.180 ================================ 00:14:47.180 Supported: No 00:14:47.180 00:14:47.180 Admin Command Set Attributes 00:14:47.180 ============================ 00:14:47.180 Security Send/Receive: Not Supported 00:14:47.180 Format NVM: Supported 00:14:47.180 Firmware Activate/Download: Not Supported 00:14:47.180 Namespace Management: Supported 00:14:47.180 Device Self-Test: Not Supported 00:14:47.180 Directives: Supported 00:14:47.180 NVMe-MI: Not Supported 00:14:47.180 Virtualization Management: Not Supported 00:14:47.180 Doorbell Buffer Config: Supported 00:14:47.180 Get LBA Status Capability: Not Supported 00:14:47.180 Command & Feature Lockdown Capability: Not Supported 00:14:47.181 Abort Command Limit: 4 00:14:47.181 Async Event Request Limit: 4 00:14:47.181 Number of Firmware Slots: N/A 00:14:47.181 Firmware Slot 1 Read-Only: N/A 00:14:47.181 Firmware Activation Without Reset: N/A 00:14:47.181 Multiple Update Detection Support: N/A 00:14:47.181 Firmware Update Granularity: No Information Provided 00:14:47.181 Per-Namespace SMART Log: Yes 00:14:47.181 Asymmetric Namespace Access Log Page: Not Supported 00:14:47.181 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:14:47.181 Command Effects Log Page: Supported 
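
Note: the "Max Data Transfer Size: 524288" reported for these controllers is a derived figure: per the NVMe spec it is 2^MDTS multiplied by the minimum memory page size, so 524288 bytes corresponds to MDTS=7 against the 4096-byte minimum page reported above (a derivation, not something the identify tool prints):

    # MDTS=7 with a 4 KiB minimum page size gives the 512 KiB transfer limit.
    echo $(( (1 << 7) * 4096 ))   # -> 524288
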
00:14:47.181 Get Log Page Extended Data: Supported 00:14:47.181 Telemetry Log Pages: Not Supported 00:14:47.181 Persistent Event Log Pages: Not Supported 00:14:47.181 Supported Log Pages Log Page: May Support 00:14:47.181 Commands Supported & Effects Log Page: Not Supported 00:14:47.181 Feature Identifiers & Effects Log Page:May Support 00:14:47.181 NVMe-MI Commands & Effects Log Page: May Support 00:14:47.181 Data Area 4 for Telemetry Log: Not Supported 00:14:47.181 Error Log Page Entries Supported: 1 00:14:47.181 Keep Alive: Not Supported 00:14:47.181 00:14:47.181 NVM Command Set Attributes 00:14:47.181 ========================== 00:14:47.181 Submission Queue Entry Size 00:14:47.181 Max: 64 00:14:47.181 Min: 64 00:14:47.181 Completion Queue Entry Size 00:14:47.181 Max: 16 00:14:47.181 Min: 16 00:14:47.181 Number of Namespaces: 256 00:14:47.181 Compare Command: Supported 00:14:47.181 Write Uncorrectable Command: Not Supported 00:14:47.181 Dataset Management Command: Supported 00:14:47.181 Write Zeroes Command: Supported 00:14:47.181 Set Features Save Field: Supported 00:14:47.181 Reservations: Not Supported 00:14:47.181 Timestamp: Supported 00:14:47.181 Copy: Supported 00:14:47.181 Volatile Write Cache: Present 00:14:47.181 Atomic Write Unit (Normal): 1 00:14:47.181 Atomic Write Unit (PFail): 1 00:14:47.181 Atomic Compare & Write Unit: 1 00:14:47.181 Fused Compare & Write: Not Supported 00:14:47.181 Scatter-Gather List 00:14:47.181 SGL Command Set: Supported 00:14:47.181 SGL Keyed: Not Supported 00:14:47.181 SGL Bit Bucket Descriptor: Not Supported 00:14:47.181 SGL Metadata Pointer: Not Supported 00:14:47.181 Oversized SGL: Not Supported 00:14:47.181 SGL Metadata Address: Not Supported 00:14:47.181 SGL Offset: Not Supported 00:14:47.181 Transport SGL Data Block: Not Supported 00:14:47.181 Replay Protected Memory Block: Not Supported 00:14:47.181 00:14:47.181 Firmware Slot Information 00:14:47.181 ========================= 00:14:47.181 Active slot: 1 00:14:47.181 Slot 1 Firmware Revision: 1.0 00:14:47.181 00:14:47.181 00:14:47.181 Commands Supported and Effects 00:14:47.181 ============================== 00:14:47.181 Admin Commands 00:14:47.181 -------------- 00:14:47.181 Delete I/O Submission Queue (00h): Supported 00:14:47.181 Create I/O Submission Queue (01h): Supported 00:14:47.181 Get Log Page (02h): Supported 00:14:47.181 Delete I/O Completion Queue (04h): Supported 00:14:47.181 Create I/O Completion Queue (05h): Supported 00:14:47.181 Identify (06h): Supported 00:14:47.181 Abort (08h): Supported 00:14:47.181 Set Features (09h): Supported 00:14:47.181 Get Features (0Ah): Supported 00:14:47.181 Asynchronous Event Request (0Ch): Supported 00:14:47.181 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:47.181 Directive Send (19h): Supported 00:14:47.181 Directive Receive (1Ah): Supported 00:14:47.181 Virtualization Management (1Ch): Supported 00:14:47.181 Doorbell Buffer Config (7Ch): Supported 00:14:47.181 Format NVM (80h): Supported LBA-Change 00:14:47.181 I/O Commands 00:14:47.181 ------------ 00:14:47.181 Flush (00h): Supported LBA-Change 00:14:47.181 Write (01h): Supported LBA-Change 00:14:47.181 Read (02h): Supported 00:14:47.181 Compare (05h): Supported 00:14:47.181 Write Zeroes (08h): Supported LBA-Change 00:14:47.181 Dataset Management (09h): Supported LBA-Change 00:14:47.181 Unknown (0Ch): Supported 00:14:47.181 Unknown (12h): Supported 00:14:47.181 Copy (19h): Supported LBA-Change 00:14:47.181 Unknown (1Dh): Supported LBA-Change 00:14:47.181 00:14:47.181 Error 
Log 00:14:47.181 ========= 00:14:47.181 00:14:47.181 Arbitration 00:14:47.181 =========== 00:14:47.181 Arbitration Burst: no limit 00:14:47.181 00:14:47.181 Power Management 00:14:47.181 ================ 00:14:47.181 Number of Power States: 1 00:14:47.181 Current Power State: Power State #0 00:14:47.181 Power State #0: 00:14:47.181 Max Power: 25.00 W 00:14:47.181 Non-Operational State: Operational 00:14:47.181 Entry Latency: 16 microseconds 00:14:47.181 Exit Latency: 4 microseconds 00:14:47.181 Relative Read Throughput: 0 00:14:47.181 Relative Read Latency: 0 00:14:47.181 Relative Write Throughput: 0 00:14:47.181 Relative Write Latency: 0 00:14:47.181 Idle Power: Not Reported 00:14:47.181 Active Power: Not Reported 00:14:47.181 Non-Operational Permissive Mode: Not Supported 00:14:47.181 00:14:47.181 Health Information 00:14:47.181 ================== 00:14:47.181 Critical Warnings: 00:14:47.181 Available Spare Space: OK 00:14:47.181 [2024-11-20 13:34:39.209972] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64485 terminated unexpected 00:14:47.181 Temperature: OK 00:14:47.181 Device Reliability: OK 00:14:47.181 Read Only: No 00:14:47.181 Volatile Memory Backup: OK 00:14:47.181 Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.181 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:47.181 Available Spare: 0% 00:14:47.181 Available Spare Threshold: 0% 00:14:47.181 Life Percentage Used: 0% 00:14:47.181 Data Units Read: 998 00:14:47.181 Data Units Written: 865 00:14:47.181 Host Read Commands: 47881 00:14:47.181 Host Write Commands: 46665 00:14:47.181 Controller Busy Time: 0 minutes 00:14:47.181 Power Cycles: 0 00:14:47.181 Power On Hours: 0 hours 00:14:47.181 Unsafe Shutdowns: 0 00:14:47.181 Unrecoverable Media Errors: 0 00:14:47.181 Lifetime Error Log Entries: 0 00:14:47.181 Warning Temperature Time: 0 minutes 00:14:47.181 Critical Temperature Time: 0 minutes 00:14:47.181 00:14:47.181 Number of Queues 00:14:47.181 ================ 00:14:47.181 Number of I/O Submission Queues: 64 00:14:47.181 Number of I/O Completion Queues: 64 00:14:47.181 00:14:47.181 ZNS Specific Controller Data 00:14:47.181 ============================ 00:14:47.181 Zone Append Size Limit: 0 00:14:47.181 00:14:47.181 00:14:47.181 Active Namespaces 00:14:47.181 ================= 00:14:47.181 Namespace ID:1 00:14:47.181 Error Recovery Timeout: Unlimited 00:14:47.181 Command Set Identifier: NVM (00h) 00:14:47.181 Deallocate: Supported 00:14:47.181 Deallocated/Unwritten Error: Supported 00:14:47.181 Deallocated Read Value: All 0x00 00:14:47.181 Deallocate in Write Zeroes: Not Supported 00:14:47.181 Deallocated Guard Field: 0xFFFF 00:14:47.181 Flush: Supported 00:14:47.181 Reservation: Not Supported 00:14:47.181 Namespace Sharing Capabilities: Private 00:14:47.181 Size (in LBAs): 1310720 (5GiB) 00:14:47.181 Capacity (in LBAs): 1310720 (5GiB) 00:14:47.181 Utilization (in LBAs): 1310720 (5GiB) 00:14:47.181 Thin Provisioning: Not Supported 00:14:47.181 Per-NS Atomic Units: No 00:14:47.181 Maximum Single Source Range Length: 128 00:14:47.181 Maximum Copy Length: 128 00:14:47.181 Maximum Source Range Count: 128 00:14:47.181 NGUID/EUI64 Never Reused: No 00:14:47.181 Namespace Write Protected: No 00:14:47.181 Number of LBA Formats: 8 00:14:47.181 Current LBA Format: LBA Format #04 00:14:47.181 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.181 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:47.181 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:47.181 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:14:47.181 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:47.181 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:47.181 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:47.181 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:47.181 00:14:47.181 NVM Specific Namespace Data 00:14:47.181 =========================== 00:14:47.181 Logical Block Storage Tag Mask: 0 00:14:47.181 Protection Information Capabilities: 00:14:47.181 16b Guard Protection Information Storage Tag Support: No 00:14:47.181 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:47.181 Storage Tag Check Read Support: No 00:14:47.181 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.181 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.181 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.181 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.181 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.182 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.182 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.182 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.182 ===================================================== 00:14:47.182 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:47.182 ===================================================== 00:14:47.182 Controller Capabilities/Features 00:14:47.182 ================================ 00:14:47.182 Vendor ID: 1b36 00:14:47.182 Subsystem Vendor ID: 1af4 00:14:47.182 Serial Number: 12343 00:14:47.182 Model Number: QEMU NVMe Ctrl 00:14:47.182 Firmware Version: 8.0.0 00:14:47.182 Recommended Arb Burst: 6 00:14:47.182 IEEE OUI Identifier: 00 54 52 00:14:47.182 Multi-path I/O 00:14:47.182 May have multiple subsystem ports: No 00:14:47.182 May have multiple controllers: Yes 00:14:47.182 Associated with SR-IOV VF: No 00:14:47.182 Max Data Transfer Size: 524288 00:14:47.182 Max Number of Namespaces: 256 00:14:47.182 Max Number of I/O Queues: 64 00:14:47.182 NVMe Specification Version (VS): 1.4 00:14:47.182 NVMe Specification Version (Identify): 1.4 00:14:47.182 Maximum Queue Entries: 2048 00:14:47.182 Contiguous Queues Required: Yes 00:14:47.182 Arbitration Mechanisms Supported 00:14:47.182 Weighted Round Robin: Not Supported 00:14:47.182 Vendor Specific: Not Supported 00:14:47.182 Reset Timeout: 7500 ms 00:14:47.182 Doorbell Stride: 4 bytes 00:14:47.182 NVM Subsystem Reset: Not Supported 00:14:47.182 Command Sets Supported 00:14:47.182 NVM Command Set: Supported 00:14:47.182 Boot Partition: Not Supported 00:14:47.182 Memory Page Size Minimum: 4096 bytes 00:14:47.182 Memory Page Size Maximum: 65536 bytes 00:14:47.182 Persistent Memory Region: Not Supported 00:14:47.182 Optional Asynchronous Events Supported 00:14:47.182 Namespace Attribute Notices: Supported 00:14:47.182 Firmware Activation Notices: Not Supported 00:14:47.182 ANA Change Notices: Not Supported 00:14:47.182 PLE Aggregate Log Change Notices: Not Supported 00:14:47.182 LBA Status Info Alert Notices: Not Supported 00:14:47.182 EGE Aggregate Log Change Notices: Not Supported 00:14:47.182 Normal NVM Subsystem Shutdown event: Not Supported 00:14:47.182 Zone 
Descriptor Change Notices: Not Supported 00:14:47.182 Discovery Log Change Notices: Not Supported 00:14:47.182 Controller Attributes 00:14:47.182 128-bit Host Identifier: Not Supported 00:14:47.182 Non-Operational Permissive Mode: Not Supported 00:14:47.182 NVM Sets: Not Supported 00:14:47.182 Read Recovery Levels: Not Supported 00:14:47.182 Endurance Groups: Supported 00:14:47.182 Predictable Latency Mode: Not Supported 00:14:47.182 Traffic Based Keep ALive: Not Supported 00:14:47.182 Namespace Granularity: Not Supported 00:14:47.182 SQ Associations: Not Supported 00:14:47.182 UUID List: Not Supported 00:14:47.182 Multi-Domain Subsystem: Not Supported 00:14:47.182 Fixed Capacity Management: Not Supported 00:14:47.182 Variable Capacity Management: Not Supported 00:14:47.182 Delete Endurance Group: Not Supported 00:14:47.182 Delete NVM Set: Not Supported 00:14:47.182 Extended LBA Formats Supported: Supported 00:14:47.182 Flexible Data Placement Supported: Supported 00:14:47.182 00:14:47.182 Controller Memory Buffer Support 00:14:47.182 ================================ 00:14:47.182 Supported: No 00:14:47.182 00:14:47.182 Persistent Memory Region Support 00:14:47.182 ================================ 00:14:47.182 Supported: No 00:14:47.182 00:14:47.182 Admin Command Set Attributes 00:14:47.182 ============================ 00:14:47.182 Security Send/Receive: Not Supported 00:14:47.182 Format NVM: Supported 00:14:47.182 Firmware Activate/Download: Not Supported 00:14:47.182 Namespace Management: Supported 00:14:47.182 Device Self-Test: Not Supported 00:14:47.182 Directives: Supported 00:14:47.182 NVMe-MI: Not Supported 00:14:47.182 Virtualization Management: Not Supported 00:14:47.182 Doorbell Buffer Config: Supported 00:14:47.182 Get LBA Status Capability: Not Supported 00:14:47.182 Command & Feature Lockdown Capability: Not Supported 00:14:47.182 Abort Command Limit: 4 00:14:47.182 Async Event Request Limit: 4 00:14:47.182 Number of Firmware Slots: N/A 00:14:47.182 Firmware Slot 1 Read-Only: N/A 00:14:47.182 Firmware Activation Without Reset: N/A 00:14:47.182 Multiple Update Detection Support: N/A 00:14:47.182 Firmware Update Granularity: No Information Provided 00:14:47.182 Per-Namespace SMART Log: Yes 00:14:47.182 Asymmetric Namespace Access Log Page: Not Supported 00:14:47.182 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:47.182 Command Effects Log Page: Supported 00:14:47.182 Get Log Page Extended Data: Supported 00:14:47.182 Telemetry Log Pages: Not Supported 00:14:47.182 Persistent Event Log Pages: Not Supported 00:14:47.182 Supported Log Pages Log Page: May Support 00:14:47.182 Commands Supported & Effects Log Page: Not Supported 00:14:47.182 Feature Identifiers & Effects Log Page:May Support 00:14:47.182 NVMe-MI Commands & Effects Log Page: May Support 00:14:47.182 Data Area 4 for Telemetry Log: Not Supported 00:14:47.182 Error Log Page Entries Supported: 1 00:14:47.182 Keep Alive: Not Supported 00:14:47.182 00:14:47.182 NVM Command Set Attributes 00:14:47.182 ========================== 00:14:47.182 Submission Queue Entry Size 00:14:47.182 Max: 64 00:14:47.182 Min: 64 00:14:47.182 Completion Queue Entry Size 00:14:47.182 Max: 16 00:14:47.182 Min: 16 00:14:47.182 Number of Namespaces: 256 00:14:47.182 Compare Command: Supported 00:14:47.182 Write Uncorrectable Command: Not Supported 00:14:47.182 Dataset Management Command: Supported 00:14:47.182 Write Zeroes Command: Supported 00:14:47.182 Set Features Save Field: Supported 00:14:47.182 Reservations: Not Supported 00:14:47.182 
Timestamp: Supported 00:14:47.182 Copy: Supported 00:14:47.182 Volatile Write Cache: Present 00:14:47.182 Atomic Write Unit (Normal): 1 00:14:47.182 Atomic Write Unit (PFail): 1 00:14:47.182 Atomic Compare & Write Unit: 1 00:14:47.182 Fused Compare & Write: Not Supported 00:14:47.182 Scatter-Gather List 00:14:47.182 SGL Command Set: Supported 00:14:47.182 SGL Keyed: Not Supported 00:14:47.182 SGL Bit Bucket Descriptor: Not Supported 00:14:47.182 SGL Metadata Pointer: Not Supported 00:14:47.182 Oversized SGL: Not Supported 00:14:47.182 SGL Metadata Address: Not Supported 00:14:47.182 SGL Offset: Not Supported 00:14:47.182 Transport SGL Data Block: Not Supported 00:14:47.182 Replay Protected Memory Block: Not Supported 00:14:47.182 00:14:47.182 Firmware Slot Information 00:14:47.182 ========================= 00:14:47.182 Active slot: 1 00:14:47.182 Slot 1 Firmware Revision: 1.0 00:14:47.182 00:14:47.182 00:14:47.182 Commands Supported and Effects 00:14:47.182 ============================== 00:14:47.182 Admin Commands 00:14:47.182 -------------- 00:14:47.182 Delete I/O Submission Queue (00h): Supported 00:14:47.182 Create I/O Submission Queue (01h): Supported 00:14:47.182 Get Log Page (02h): Supported 00:14:47.182 Delete I/O Completion Queue (04h): Supported 00:14:47.182 Create I/O Completion Queue (05h): Supported 00:14:47.182 Identify (06h): Supported 00:14:47.182 Abort (08h): Supported 00:14:47.182 Set Features (09h): Supported 00:14:47.182 Get Features (0Ah): Supported 00:14:47.182 Asynchronous Event Request (0Ch): Supported 00:14:47.182 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:47.182 Directive Send (19h): Supported 00:14:47.182 Directive Receive (1Ah): Supported 00:14:47.182 Virtualization Management (1Ch): Supported 00:14:47.182 Doorbell Buffer Config (7Ch): Supported 00:14:47.182 Format NVM (80h): Supported LBA-Change 00:14:47.182 I/O Commands 00:14:47.182 ------------ 00:14:47.182 Flush (00h): Supported LBA-Change 00:14:47.182 Write (01h): Supported LBA-Change 00:14:47.182 Read (02h): Supported 00:14:47.182 Compare (05h): Supported 00:14:47.182 Write Zeroes (08h): Supported LBA-Change 00:14:47.182 Dataset Management (09h): Supported LBA-Change 00:14:47.182 Unknown (0Ch): Supported 00:14:47.182 Unknown (12h): Supported 00:14:47.182 Copy (19h): Supported LBA-Change 00:14:47.182 Unknown (1Dh): Supported LBA-Change 00:14:47.182 00:14:47.182 Error Log 00:14:47.182 ========= 00:14:47.182 00:14:47.182 Arbitration 00:14:47.182 =========== 00:14:47.182 Arbitration Burst: no limit 00:14:47.182 00:14:47.182 Power Management 00:14:47.182 ================ 00:14:47.182 Number of Power States: 1 00:14:47.182 Current Power State: Power State #0 00:14:47.182 Power State #0: 00:14:47.182 Max Power: 25.00 W 00:14:47.182 Non-Operational State: Operational 00:14:47.182 Entry Latency: 16 microseconds 00:14:47.182 Exit Latency: 4 microseconds 00:14:47.182 Relative Read Throughput: 0 00:14:47.183 Relative Read Latency: 0 00:14:47.183 Relative Write Throughput: 0 00:14:47.183 Relative Write Latency: 0 00:14:47.183 Idle Power: Not Reported 00:14:47.183 Active Power: Not Reported 00:14:47.183 Non-Operational Permissive Mode: Not Supported 00:14:47.183 00:14:47.183 Health Information 00:14:47.183 ================== 00:14:47.183 Critical Warnings: 00:14:47.183 Available Spare Space: OK 00:14:47.183 Temperature: OK 00:14:47.183 Device Reliability: OK 00:14:47.183 Read Only: No 00:14:47.183 Volatile Memory Backup: OK 00:14:47.183 Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.183 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:47.183 Available Spare: 0% 00:14:47.183 Available Spare Threshold: 0% 00:14:47.183 Life Percentage Used: 0% 00:14:47.183 Data Units Read: 691 00:14:47.183 Data Units Written: 620 00:14:47.183 Host Read Commands: 32718 00:14:47.183 Host Write Commands: 32141 00:14:47.183 Controller Busy Time: 0 minutes 00:14:47.183 Power Cycles: 0 00:14:47.183 Power On Hours: 0 hours 00:14:47.183 Unsafe Shutdowns: 0 00:14:47.183 Unrecoverable Media Errors: 0 00:14:47.183 Lifetime Error Log Entries: 0 00:14:47.183 Warning Temperature Time: 0 minutes 00:14:47.183 Critical Temperature Time: 0 minutes 00:14:47.183 00:14:47.183 Number of Queues 00:14:47.183 ================ 00:14:47.183 Number of I/O Submission Queues: 64 00:14:47.183 Number of I/O Completion Queues: 64 00:14:47.183 00:14:47.183 ZNS Specific Controller Data 00:14:47.183 ============================ 00:14:47.183 Zone Append Size Limit: 0 00:14:47.183 00:14:47.183 00:14:47.183 Active Namespaces 00:14:47.183 ================= 00:14:47.183 Namespace ID:1 00:14:47.183 Error Recovery Timeout: Unlimited 00:14:47.183 Command Set Identifier: NVM (00h) 00:14:47.183 Deallocate: Supported 00:14:47.183 Deallocated/Unwritten Error: Supported 00:14:47.183 Deallocated Read Value: All 0x00 00:14:47.183 Deallocate in Write Zeroes: Not Supported 00:14:47.183 Deallocated Guard Field: 0xFFFF 00:14:47.183 Flush: Supported 00:14:47.183 Reservation: Not Supported 00:14:47.183 Namespace Sharing Capabilities: Multiple Controllers 00:14:47.183 Size (in LBAs): 262144 (1GiB) 00:14:47.183 Capacity (in LBAs): 262144 (1GiB) 00:14:47.183 Utilization (in LBAs): 262144 (1GiB) 00:14:47.183 Thin Provisioning: Not Supported 00:14:47.183 Per-NS Atomic Units: No 00:14:47.183 Maximum Single Source Range Length: 128 00:14:47.183 Maximum Copy Length: 128 00:14:47.183 Maximum Source Range Count: 128 00:14:47.183 NGUID/EUI64 Never Reused: No 00:14:47.183 Namespace Write Protected: No 00:14:47.183 Endurance group ID: 1 00:14:47.183 Number of LBA Formats: 8 00:14:47.183 Current LBA Format: LBA Format #04 00:14:47.183 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.183 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:47.183 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:47.183 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:47.183 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:47.183 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:47.183 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:47.183 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:47.183 00:14:47.183 Get Feature FDP: 00:14:47.183 ================ 00:14:47.183 Enabled: Yes 00:14:47.183 FDP configuration index: 0 00:14:47.183 00:14:47.183 FDP configurations log page 00:14:47.183 =========================== 00:14:47.183 Number of FDP configurations: 1 00:14:47.183 Version: 0 00:14:47.183 Size: 112 00:14:47.183 FDP Configuration Descriptor: 0 00:14:47.183 Descriptor Size: 96 00:14:47.183 Reclaim Group Identifier format: 2 00:14:47.183 FDP Volatile Write Cache: Not Present 00:14:47.183 FDP Configuration: Valid 00:14:47.183 Vendor Specific Size: 0 00:14:47.183 Number of Reclaim Groups: 2 00:14:47.183 Number of Reclaim Unit Handles: 8 00:14:47.183 Max Placement Identifiers: 128 00:14:47.183 Number of Namespaces Supported: 256 00:14:47.183 Reclaim Unit Nominal Size: 6000000 bytes 00:14:47.183 Estimated Reclaim Unit Time Limit: Not Reported 00:14:47.183 RUH Desc #000: RUH Type: Initially Isolated 00:14:47.183 RUH Desc #001: RUH
Type: Initially Isolated 00:14:47.183 RUH Desc #002: RUH Type: Initially Isolated 00:14:47.183 RUH Desc #003: RUH Type: Initially Isolated 00:14:47.183 RUH Desc #004: RUH Type: Initially Isolated 00:14:47.183 RUH Desc #005: RUH Type: Initially Isolated 00:14:47.183 RUH Desc #006: RUH Type: Initially Isolated 00:14:47.183 RUH Desc #007: RUH Type: Initially Isolated 00:14:47.183 00:14:47.183 FDP reclaim unit handle usage log page 00:14:47.183 ====================================== 00:14:47.183 Number of Reclaim Unit Handles: 8 00:14:47.183 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:47.183 RUH Usage Desc #001: RUH Attributes: Unused 00:14:47.183 RUH Usage Desc #002: RUH Attributes: Unused 00:14:47.183 RUH Usage Desc #003: RUH Attributes: Unused 00:14:47.183 RUH Usage Desc #004: RUH Attributes: Unused 00:14:47.183 RUH Usage Desc #005: RUH Attributes: Unused 00:14:47.183 RUH Usage Desc #006: RUH Attributes: Unused 00:14:47.183 RUH Usage Desc #007: RUH Attributes: Unused 00:14:47.183 00:14:47.183 FDP statistics log page 00:14:47.183 ======================= 00:14:47.183 Host bytes with metadata written: 385101824 00:14:47.183 [2024-11-20 13:34:39.211930] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64485 terminated unexpected 00:14:47.183 Media bytes with metadata written: 385171456 00:14:47.183 Media bytes erased: 0 00:14:47.183 00:14:47.183 FDP events log page 00:14:47.183 =================== 00:14:47.183 Number of FDP events: 0 00:14:47.183 00:14:47.183 NVM Specific Namespace Data 00:14:47.183 =========================== 00:14:47.183 Logical Block Storage Tag Mask: 0 00:14:47.183 Protection Information Capabilities: 00:14:47.183 16b Guard Protection Information Storage Tag Support: No 00:14:47.183 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:47.183 Storage Tag Check Read Support: No 00:14:47.183 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.183 ===================================================== 00:14:47.183 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:47.183 ===================================================== 00:14:47.183 Controller Capabilities/Features 00:14:47.183 ================================ 00:14:47.183 Vendor ID: 1b36 00:14:47.183 Subsystem Vendor ID: 1af4 00:14:47.183 Serial Number: 12342 00:14:47.183 Model Number: QEMU NVMe Ctrl 00:14:47.183 Firmware Version: 8.0.0 00:14:47.183 Recommended Arb Burst: 6 00:14:47.183 IEEE OUI Identifier: 00 54 52 00:14:47.183 Multi-path I/O 00:14:47.183 May have multiple subsystem ports: No 00:14:47.183 May have multiple controllers: No 00:14:47.183 Associated with SR-IOV VF: No 00:14:47.183 Max Data Transfer Size: 524288 00:14:47.183 Max Number of Namespaces: 256 00:14:47.183
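
Note: the FDP geometry dumped above for controller 12343 (2 reclaim groups, 8 reclaim unit handles, a 6000000-byte nominal reclaim unit) is the sort of layout QEMU's NVMe emulation exposes when Flexible Data Placement is enabled on an nvme-subsys device. A hypothetical QEMU fragment using the fdp.* properties documented for QEMU's NVMe emulation, with values inferred from this log rather than taken from the CI's actual VM scripts:

    # Hypothetical invocation for an FDP-capable subsystem like 12343;
    # the drive id and ruhs selection are illustrative only.
    qemu-system-x86_64 ... \
        -device nvme-subsys,id=subsys3,nqn=fdp-subsys3,fdp=on,fdp.nrg=2,fdp.nruh=8,fdp.runs=6000000 \
        -device nvme,serial=12343,subsys=subsys3 \
        -device nvme-ns,drive=fdp0,nsid=1,fdp.ruhs=0
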
Max Number of I/O Queues: 64 00:14:47.183 NVMe Specification Version (VS): 1.4 00:14:47.183 NVMe Specification Version (Identify): 1.4 00:14:47.183 Maximum Queue Entries: 2048 00:14:47.183 Contiguous Queues Required: Yes 00:14:47.183 Arbitration Mechanisms Supported 00:14:47.183 Weighted Round Robin: Not Supported 00:14:47.183 Vendor Specific: Not Supported 00:14:47.183 Reset Timeout: 7500 ms 00:14:47.183 Doorbell Stride: 4 bytes 00:14:47.184 NVM Subsystem Reset: Not Supported 00:14:47.184 Command Sets Supported 00:14:47.184 NVM Command Set: Supported 00:14:47.184 Boot Partition: Not Supported 00:14:47.184 Memory Page Size Minimum: 4096 bytes 00:14:47.184 Memory Page Size Maximum: 65536 bytes 00:14:47.184 Persistent Memory Region: Not Supported 00:14:47.184 Optional Asynchronous Events Supported 00:14:47.184 Namespace Attribute Notices: Supported 00:14:47.184 Firmware Activation Notices: Not Supported 00:14:47.184 ANA Change Notices: Not Supported 00:14:47.184 PLE Aggregate Log Change Notices: Not Supported 00:14:47.184 LBA Status Info Alert Notices: Not Supported 00:14:47.184 EGE Aggregate Log Change Notices: Not Supported 00:14:47.184 Normal NVM Subsystem Shutdown event: Not Supported 00:14:47.184 Zone Descriptor Change Notices: Not Supported 00:14:47.184 Discovery Log Change Notices: Not Supported 00:14:47.184 Controller Attributes 00:14:47.184 128-bit Host Identifier: Not Supported 00:14:47.184 Non-Operational Permissive Mode: Not Supported 00:14:47.184 NVM Sets: Not Supported 00:14:47.184 Read Recovery Levels: Not Supported 00:14:47.184 Endurance Groups: Not Supported 00:14:47.184 Predictable Latency Mode: Not Supported 00:14:47.184 Traffic Based Keep ALive: Not Supported 00:14:47.184 Namespace Granularity: Not Supported 00:14:47.184 SQ Associations: Not Supported 00:14:47.184 UUID List: Not Supported 00:14:47.184 Multi-Domain Subsystem: Not Supported 00:14:47.184 Fixed Capacity Management: Not Supported 00:14:47.184 Variable Capacity Management: Not Supported 00:14:47.184 Delete Endurance Group: Not Supported 00:14:47.184 Delete NVM Set: Not Supported 00:14:47.184 Extended LBA Formats Supported: Supported 00:14:47.184 Flexible Data Placement Supported: Not Supported 00:14:47.184 00:14:47.184 Controller Memory Buffer Support 00:14:47.184 ================================ 00:14:47.184 Supported: No 00:14:47.184 00:14:47.184 Persistent Memory Region Support 00:14:47.184 ================================ 00:14:47.184 Supported: No 00:14:47.184 00:14:47.184 Admin Command Set Attributes 00:14:47.184 ============================ 00:14:47.184 Security Send/Receive: Not Supported 00:14:47.184 Format NVM: Supported 00:14:47.184 Firmware Activate/Download: Not Supported 00:14:47.184 Namespace Management: Supported 00:14:47.184 Device Self-Test: Not Supported 00:14:47.184 Directives: Supported 00:14:47.184 NVMe-MI: Not Supported 00:14:47.184 Virtualization Management: Not Supported 00:14:47.184 Doorbell Buffer Config: Supported 00:14:47.184 Get LBA Status Capability: Not Supported 00:14:47.184 Command & Feature Lockdown Capability: Not Supported 00:14:47.184 Abort Command Limit: 4 00:14:47.184 Async Event Request Limit: 4 00:14:47.184 Number of Firmware Slots: N/A 00:14:47.184 Firmware Slot 1 Read-Only: N/A 00:14:47.184 Firmware Activation Without Reset: N/A 00:14:47.184 Multiple Update Detection Support: N/A 00:14:47.184 Firmware Update Granularity: No Information Provided 00:14:47.184 Per-Namespace SMART Log: Yes 00:14:47.184 Asymmetric Namespace Access Log Page: Not Supported 00:14:47.184 
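
Note: the namespace sizes throughout these dumps are just the LBA count times the data size of the current LBA format: 262144 x 4096 bytes is exactly 1GiB for the FDP namespace above, 1048576 x 4096 is the 4GiB namespace of controller 12342 further below, and the 5GiB label on the 1548666-LBA namespace earlier comes from the same arithmetic with the GiB value truncated to an integer:

    # LBA count x block size, reduced to whole GiB as the identify tool prints it.
    echo $(( 262144 * 4096 / 1024**3 ))GiB    # -> 1GiB
    echo $(( 1048576 * 4096 / 1024**3 ))GiB   # -> 4GiB
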
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:47.184 Command Effects Log Page: Supported 00:14:47.184 Get Log Page Extended Data: Supported 00:14:47.184 Telemetry Log Pages: Not Supported 00:14:47.184 Persistent Event Log Pages: Not Supported 00:14:47.184 Supported Log Pages Log Page: May Support 00:14:47.184 Commands Supported & Effects Log Page: Not Supported 00:14:47.184 Feature Identifiers & Effects Log Page:May Support 00:14:47.184 NVMe-MI Commands & Effects Log Page: May Support 00:14:47.184 Data Area 4 for Telemetry Log: Not Supported 00:14:47.184 Error Log Page Entries Supported: 1 00:14:47.184 Keep Alive: Not Supported 00:14:47.184 00:14:47.184 NVM Command Set Attributes 00:14:47.184 ========================== 00:14:47.184 Submission Queue Entry Size 00:14:47.184 Max: 64 00:14:47.184 Min: 64 00:14:47.184 Completion Queue Entry Size 00:14:47.184 Max: 16 00:14:47.184 Min: 16 00:14:47.184 Number of Namespaces: 256 00:14:47.184 Compare Command: Supported 00:14:47.184 Write Uncorrectable Command: Not Supported 00:14:47.184 Dataset Management Command: Supported 00:14:47.184 Write Zeroes Command: Supported 00:14:47.184 Set Features Save Field: Supported 00:14:47.184 Reservations: Not Supported 00:14:47.184 Timestamp: Supported 00:14:47.184 Copy: Supported 00:14:47.184 Volatile Write Cache: Present 00:14:47.184 Atomic Write Unit (Normal): 1 00:14:47.184 Atomic Write Unit (PFail): 1 00:14:47.184 Atomic Compare & Write Unit: 1 00:14:47.184 Fused Compare & Write: Not Supported 00:14:47.184 Scatter-Gather List 00:14:47.184 SGL Command Set: Supported 00:14:47.184 SGL Keyed: Not Supported 00:14:47.184 SGL Bit Bucket Descriptor: Not Supported 00:14:47.184 SGL Metadata Pointer: Not Supported 00:14:47.184 Oversized SGL: Not Supported 00:14:47.184 SGL Metadata Address: Not Supported 00:14:47.184 SGL Offset: Not Supported 00:14:47.184 Transport SGL Data Block: Not Supported 00:14:47.184 Replay Protected Memory Block: Not Supported 00:14:47.184 00:14:47.184 Firmware Slot Information 00:14:47.184 ========================= 00:14:47.184 Active slot: 1 00:14:47.184 Slot 1 Firmware Revision: 1.0 00:14:47.184 00:14:47.184 00:14:47.184 Commands Supported and Effects 00:14:47.184 ============================== 00:14:47.184 Admin Commands 00:14:47.184 -------------- 00:14:47.184 Delete I/O Submission Queue (00h): Supported 00:14:47.184 Create I/O Submission Queue (01h): Supported 00:14:47.184 Get Log Page (02h): Supported 00:14:47.184 Delete I/O Completion Queue (04h): Supported 00:14:47.184 Create I/O Completion Queue (05h): Supported 00:14:47.184 Identify (06h): Supported 00:14:47.184 Abort (08h): Supported 00:14:47.184 Set Features (09h): Supported 00:14:47.184 Get Features (0Ah): Supported 00:14:47.184 Asynchronous Event Request (0Ch): Supported 00:14:47.184 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:47.184 Directive Send (19h): Supported 00:14:47.184 Directive Receive (1Ah): Supported 00:14:47.184 Virtualization Management (1Ch): Supported 00:14:47.184 Doorbell Buffer Config (7Ch): Supported 00:14:47.184 Format NVM (80h): Supported LBA-Change 00:14:47.184 I/O Commands 00:14:47.184 ------------ 00:14:47.184 Flush (00h): Supported LBA-Change 00:14:47.184 Write (01h): Supported LBA-Change 00:14:47.184 Read (02h): Supported 00:14:47.184 Compare (05h): Supported 00:14:47.184 Write Zeroes (08h): Supported LBA-Change 00:14:47.184 Dataset Management (09h): Supported LBA-Change 00:14:47.184 Unknown (0Ch): Supported 00:14:47.184 Unknown (12h): Supported 00:14:47.184 Copy (19h): Supported 
LBA-Change 00:14:47.184 Unknown (1Dh): Supported LBA-Change 00:14:47.184 00:14:47.184 Error Log 00:14:47.184 ========= 00:14:47.184 00:14:47.184 Arbitration 00:14:47.184 =========== 00:14:47.184 Arbitration Burst: no limit 00:14:47.184 00:14:47.184 Power Management 00:14:47.184 ================ 00:14:47.184 Number of Power States: 1 00:14:47.184 Current Power State: Power State #0 00:14:47.184 Power State #0: 00:14:47.184 Max Power: 25.00 W 00:14:47.184 Non-Operational State: Operational 00:14:47.184 Entry Latency: 16 microseconds 00:14:47.184 Exit Latency: 4 microseconds 00:14:47.184 Relative Read Throughput: 0 00:14:47.184 Relative Read Latency: 0 00:14:47.184 Relative Write Throughput: 0 00:14:47.184 Relative Write Latency: 0 00:14:47.184 Idle Power: Not Reported 00:14:47.184 Active Power: Not Reported 00:14:47.184 Non-Operational Permissive Mode: Not Supported 00:14:47.184 00:14:47.184 Health Information 00:14:47.184 ================== 00:14:47.184 Critical Warnings: 00:14:47.184 Available Spare Space: OK 00:14:47.184 Temperature: OK 00:14:47.184 Device Reliability: OK 00:14:47.184 Read Only: No 00:14:47.184 Volatile Memory Backup: OK 00:14:47.184 Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.184 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:47.184 Available Spare: 0% 00:14:47.184 Available Spare Threshold: 0% 00:14:47.184 Life Percentage Used: 0% 00:14:47.184 Data Units Read: 2062 00:14:47.184 Data Units Written: 1850 00:14:47.184 Host Read Commands: 98072 00:14:47.184 Host Write Commands: 96344 00:14:47.184 Controller Busy Time: 0 minutes 00:14:47.184 Power Cycles: 0 00:14:47.184 Power On Hours: 0 hours 00:14:47.185 Unsafe Shutdowns: 0 00:14:47.185 Unrecoverable Media Errors: 0 00:14:47.185 Lifetime Error Log Entries: 0 00:14:47.185 Warning Temperature Time: 0 minutes 00:14:47.185 Critical Temperature Time: 0 minutes 00:14:47.185 00:14:47.185 Number of Queues 00:14:47.185 ================ 00:14:47.185 Number of I/O Submission Queues: 64 00:14:47.185 Number of I/O Completion Queues: 64 00:14:47.185 00:14:47.185 ZNS Specific Controller Data 00:14:47.185 ============================ 00:14:47.185 Zone Append Size Limit: 0 00:14:47.185 00:14:47.185 00:14:47.185 Active Namespaces 00:14:47.185 ================= 00:14:47.185 Namespace ID:1 00:14:47.185 Error Recovery Timeout: Unlimited 00:14:47.185 Command Set Identifier: NVM (00h) 00:14:47.185 Deallocate: Supported 00:14:47.185 Deallocated/Unwritten Error: Supported 00:14:47.185 Deallocated Read Value: All 0x00 00:14:47.185 Deallocate in Write Zeroes: Not Supported 00:14:47.185 Deallocated Guard Field: 0xFFFF 00:14:47.185 Flush: Supported 00:14:47.185 Reservation: Not Supported 00:14:47.185 Namespace Sharing Capabilities: Private 00:14:47.185 Size (in LBAs): 1048576 (4GiB) 00:14:47.185 Capacity (in LBAs): 1048576 (4GiB) 00:14:47.185 Utilization (in LBAs): 1048576 (4GiB) 00:14:47.185 Thin Provisioning: Not Supported 00:14:47.185 Per-NS Atomic Units: No 00:14:47.185 Maximum Single Source Range Length: 128 00:14:47.185 Maximum Copy Length: 128 00:14:47.185 Maximum Source Range Count: 128 00:14:47.185 NGUID/EUI64 Never Reused: No 00:14:47.185 Namespace Write Protected: No 00:14:47.185 Number of LBA Formats: 8 00:14:47.185 Current LBA Format: LBA Format #04 00:14:47.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:47.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:47.185 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:47.185 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:14:47.185 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:47.185 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:47.185 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:47.185 00:14:47.185 NVM Specific Namespace Data 00:14:47.185 =========================== 00:14:47.185 Logical Block Storage Tag Mask: 0 00:14:47.185 Protection Information Capabilities: 00:14:47.185 16b Guard Protection Information Storage Tag Support: No 00:14:47.185 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:47.185 Storage Tag Check Read Support: No 00:14:47.185 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Namespace ID:2 00:14:47.185 Error Recovery Timeout: Unlimited 00:14:47.185 Command Set Identifier: NVM (00h) 00:14:47.185 Deallocate: Supported 00:14:47.185 Deallocated/Unwritten Error: Supported 00:14:47.185 Deallocated Read Value: All 0x00 00:14:47.185 Deallocate in Write Zeroes: Not Supported 00:14:47.185 Deallocated Guard Field: 0xFFFF 00:14:47.185 Flush: Supported 00:14:47.185 Reservation: Not Supported 00:14:47.185 Namespace Sharing Capabilities: Private 00:14:47.185 Size (in LBAs): 1048576 (4GiB) 00:14:47.185 Capacity (in LBAs): 1048576 (4GiB) 00:14:47.185 Utilization (in LBAs): 1048576 (4GiB) 00:14:47.185 Thin Provisioning: Not Supported 00:14:47.185 Per-NS Atomic Units: No 00:14:47.185 Maximum Single Source Range Length: 128 00:14:47.185 Maximum Copy Length: 128 00:14:47.185 Maximum Source Range Count: 128 00:14:47.185 NGUID/EUI64 Never Reused: No 00:14:47.185 Namespace Write Protected: No 00:14:47.185 Number of LBA Formats: 8 00:14:47.185 Current LBA Format: LBA Format #04 00:14:47.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:47.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:47.185 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:47.185 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:47.185 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:47.185 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:47.185 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:47.185 00:14:47.185 NVM Specific Namespace Data 00:14:47.185 =========================== 00:14:47.185 Logical Block Storage Tag Mask: 0 00:14:47.185 Protection Information Capabilities: 00:14:47.185 16b Guard Protection Information Storage Tag Support: No 00:14:47.185 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:47.185 Storage Tag Check Read Support: No 00:14:47.185 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:14:47.185 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.185 Namespace ID:3 00:14:47.185 Error Recovery Timeout: Unlimited 00:14:47.185 Command Set Identifier: NVM (00h) 00:14:47.185 Deallocate: Supported 00:14:47.185 Deallocated/Unwritten Error: Supported 00:14:47.185 Deallocated Read Value: All 0x00 00:14:47.185 Deallocate in Write Zeroes: Not Supported 00:14:47.185 Deallocated Guard Field: 0xFFFF 00:14:47.185 Flush: Supported 00:14:47.185 Reservation: Not Supported 00:14:47.185 Namespace Sharing Capabilities: Private 00:14:47.185 Size (in LBAs): 1048576 (4GiB) 00:14:47.454 Capacity (in LBAs): 1048576 (4GiB) 00:14:47.454 Utilization (in LBAs): 1048576 (4GiB) 00:14:47.454 Thin Provisioning: Not Supported 00:14:47.454 Per-NS Atomic Units: No 00:14:47.454 Maximum Single Source Range Length: 128 00:14:47.454 Maximum Copy Length: 128 00:14:47.454 Maximum Source Range Count: 128 00:14:47.454 NGUID/EUI64 Never Reused: No 00:14:47.454 Namespace Write Protected: No 00:14:47.454 Number of LBA Formats: 8 00:14:47.454 Current LBA Format: LBA Format #04 00:14:47.454 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.454 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:47.454 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:47.454 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:47.454 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:47.454 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:47.454 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:47.454 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:47.454 00:14:47.454 NVM Specific Namespace Data 00:14:47.454 =========================== 00:14:47.454 Logical Block Storage Tag Mask: 0 00:14:47.454 Protection Information Capabilities: 00:14:47.454 16b Guard Protection Information Storage Tag Support: No 00:14:47.454 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:47.454 Storage Tag Check Read Support: No 00:14:47.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.454 13:34:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:47.454 13:34:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:14:47.721 ===================================================== 00:14:47.721 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:47.721 ===================================================== 00:14:47.721 Controller Capabilities/Features 00:14:47.721 ================================ 00:14:47.721 Vendor ID: 1b36 00:14:47.721 Subsystem Vendor ID: 1af4 00:14:47.721 Serial Number: 12340 00:14:47.721 Model Number: QEMU NVMe Ctrl 00:14:47.721 Firmware Version: 8.0.0 00:14:47.722 Recommended Arb Burst: 6 00:14:47.722 IEEE OUI Identifier: 00 54 52 00:14:47.722 Multi-path I/O 00:14:47.722 May have multiple subsystem ports: No 00:14:47.722 May have multiple controllers: No 00:14:47.722 Associated with SR-IOV VF: No 00:14:47.722 Max Data Transfer Size: 524288 00:14:47.722 Max Number of Namespaces: 256 00:14:47.722 Max Number of I/O Queues: 64 00:14:47.722 NVMe Specification Version (VS): 1.4 00:14:47.722 NVMe Specification Version (Identify): 1.4 00:14:47.722 Maximum Queue Entries: 2048 00:14:47.722 Contiguous Queues Required: Yes 00:14:47.722 Arbitration Mechanisms Supported 00:14:47.722 Weighted Round Robin: Not Supported 00:14:47.722 Vendor Specific: Not Supported 00:14:47.722 Reset Timeout: 7500 ms 00:14:47.722 Doorbell Stride: 4 bytes 00:14:47.722 NVM Subsystem Reset: Not Supported 00:14:47.722 Command Sets Supported 00:14:47.722 NVM Command Set: Supported 00:14:47.722 Boot Partition: Not Supported 00:14:47.722 Memory Page Size Minimum: 4096 bytes 00:14:47.722 Memory Page Size Maximum: 65536 bytes 00:14:47.722 Persistent Memory Region: Not Supported 00:14:47.722 Optional Asynchronous Events Supported 00:14:47.722 Namespace Attribute Notices: Supported 00:14:47.722 Firmware Activation Notices: Not Supported 00:14:47.722 ANA Change Notices: Not Supported 00:14:47.722 PLE Aggregate Log Change Notices: Not Supported 00:14:47.722 LBA Status Info Alert Notices: Not Supported 00:14:47.722 EGE Aggregate Log Change Notices: Not Supported 00:14:47.722 Normal NVM Subsystem Shutdown event: Not Supported 00:14:47.722 Zone Descriptor Change Notices: Not Supported 00:14:47.722 Discovery Log Change Notices: Not Supported 00:14:47.722 Controller Attributes 00:14:47.722 128-bit Host Identifier: Not Supported 00:14:47.722 Non-Operational Permissive Mode: Not Supported 00:14:47.722 NVM Sets: Not Supported 00:14:47.722 Read Recovery Levels: Not Supported 00:14:47.722 Endurance Groups: Not Supported 00:14:47.722 Predictable Latency Mode: Not Supported 00:14:47.722 Traffic Based Keep ALive: Not Supported 00:14:47.722 Namespace Granularity: Not Supported 00:14:47.722 SQ Associations: Not Supported 00:14:47.722 UUID List: Not Supported 00:14:47.722 Multi-Domain Subsystem: Not Supported 00:14:47.722 Fixed Capacity Management: Not Supported 00:14:47.722 Variable Capacity Management: Not Supported 00:14:47.722 Delete Endurance Group: Not Supported 00:14:47.722 Delete NVM Set: Not Supported 00:14:47.722 Extended LBA Formats Supported: Supported 00:14:47.722 Flexible Data Placement Supported: Not Supported 00:14:47.722 00:14:47.722 Controller Memory Buffer Support 00:14:47.722 ================================ 00:14:47.722 Supported: No 00:14:47.722 00:14:47.722 Persistent Memory Region Support 00:14:47.722 ================================ 00:14:47.722 Supported: No 00:14:47.722 00:14:47.722 Admin Command Set Attributes 00:14:47.722 ============================ 00:14:47.722 Security Send/Receive: Not Supported 00:14:47.722 
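
The nvme.sh trace above (for bdf in "${bdfs[@]}" followed by spdk_nvme_identify) is what drives every controller dump in this block. Run by hand from the SPDK repo root, the loop reduces to the sketch below; the transport-ID string and the -i 0 controller-index flag are copied verbatim from the log, and the bdf list matches the four QEMU controllers in this job:

    # Re-run the identify pass over the test controllers by hand.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
    done
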
Format NVM: Supported 00:14:47.722 Firmware Activate/Download: Not Supported 00:14:47.722 Namespace Management: Supported 00:14:47.722 Device Self-Test: Not Supported 00:14:47.722 Directives: Supported 00:14:47.722 NVMe-MI: Not Supported 00:14:47.722 Virtualization Management: Not Supported 00:14:47.722 Doorbell Buffer Config: Supported 00:14:47.722 Get LBA Status Capability: Not Supported 00:14:47.722 Command & Feature Lockdown Capability: Not Supported 00:14:47.722 Abort Command Limit: 4 00:14:47.722 Async Event Request Limit: 4 00:14:47.722 Number of Firmware Slots: N/A 00:14:47.722 Firmware Slot 1 Read-Only: N/A 00:14:47.722 Firmware Activation Without Reset: N/A 00:14:47.722 Multiple Update Detection Support: N/A 00:14:47.722 Firmware Update Granularity: No Information Provided 00:14:47.722 Per-Namespace SMART Log: Yes 00:14:47.722 Asymmetric Namespace Access Log Page: Not Supported 00:14:47.722 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:47.722 Command Effects Log Page: Supported 00:14:47.722 Get Log Page Extended Data: Supported 00:14:47.722 Telemetry Log Pages: Not Supported 00:14:47.722 Persistent Event Log Pages: Not Supported 00:14:47.722 Supported Log Pages Log Page: May Support 00:14:47.722 Commands Supported & Effects Log Page: Not Supported 00:14:47.722 Feature Identifiers & Effects Log Page:May Support 00:14:47.722 NVMe-MI Commands & Effects Log Page: May Support 00:14:47.722 Data Area 4 for Telemetry Log: Not Supported 00:14:47.722 Error Log Page Entries Supported: 1 00:14:47.722 Keep Alive: Not Supported 00:14:47.722 00:14:47.722 NVM Command Set Attributes 00:14:47.722 ========================== 00:14:47.722 Submission Queue Entry Size 00:14:47.722 Max: 64 00:14:47.722 Min: 64 00:14:47.722 Completion Queue Entry Size 00:14:47.722 Max: 16 00:14:47.722 Min: 16 00:14:47.722 Number of Namespaces: 256 00:14:47.722 Compare Command: Supported 00:14:47.722 Write Uncorrectable Command: Not Supported 00:14:47.722 Dataset Management Command: Supported 00:14:47.722 Write Zeroes Command: Supported 00:14:47.722 Set Features Save Field: Supported 00:14:47.722 Reservations: Not Supported 00:14:47.722 Timestamp: Supported 00:14:47.722 Copy: Supported 00:14:47.722 Volatile Write Cache: Present 00:14:47.722 Atomic Write Unit (Normal): 1 00:14:47.722 Atomic Write Unit (PFail): 1 00:14:47.722 Atomic Compare & Write Unit: 1 00:14:47.722 Fused Compare & Write: Not Supported 00:14:47.722 Scatter-Gather List 00:14:47.722 SGL Command Set: Supported 00:14:47.722 SGL Keyed: Not Supported 00:14:47.722 SGL Bit Bucket Descriptor: Not Supported 00:14:47.722 SGL Metadata Pointer: Not Supported 00:14:47.722 Oversized SGL: Not Supported 00:14:47.722 SGL Metadata Address: Not Supported 00:14:47.722 SGL Offset: Not Supported 00:14:47.722 Transport SGL Data Block: Not Supported 00:14:47.722 Replay Protected Memory Block: Not Supported 00:14:47.722 00:14:47.722 Firmware Slot Information 00:14:47.722 ========================= 00:14:47.722 Active slot: 1 00:14:47.722 Slot 1 Firmware Revision: 1.0 00:14:47.722 00:14:47.722 00:14:47.722 Commands Supported and Effects 00:14:47.722 ============================== 00:14:47.722 Admin Commands 00:14:47.722 -------------- 00:14:47.722 Delete I/O Submission Queue (00h): Supported 00:14:47.722 Create I/O Submission Queue (01h): Supported 00:14:47.722 Get Log Page (02h): Supported 00:14:47.722 Delete I/O Completion Queue (04h): Supported 00:14:47.722 Create I/O Completion Queue (05h): Supported 00:14:47.722 Identify (06h): Supported 00:14:47.722 Abort (08h): Supported 
00:14:47.722 Set Features (09h): Supported 00:14:47.722 Get Features (0Ah): Supported 00:14:47.722 Asynchronous Event Request (0Ch): Supported 00:14:47.722 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:47.722 Directive Send (19h): Supported 00:14:47.722 Directive Receive (1Ah): Supported 00:14:47.722 Virtualization Management (1Ch): Supported 00:14:47.722 Doorbell Buffer Config (7Ch): Supported 00:14:47.722 Format NVM (80h): Supported LBA-Change 00:14:47.722 I/O Commands 00:14:47.722 ------------ 00:14:47.722 Flush (00h): Supported LBA-Change 00:14:47.722 Write (01h): Supported LBA-Change 00:14:47.722 Read (02h): Supported 00:14:47.722 Compare (05h): Supported 00:14:47.722 Write Zeroes (08h): Supported LBA-Change 00:14:47.722 Dataset Management (09h): Supported LBA-Change 00:14:47.722 Unknown (0Ch): Supported 00:14:47.722 Unknown (12h): Supported 00:14:47.722 Copy (19h): Supported LBA-Change 00:14:47.722 Unknown (1Dh): Supported LBA-Change 00:14:47.722 00:14:47.722 Error Log 00:14:47.722 ========= 00:14:47.722 00:14:47.722 Arbitration 00:14:47.722 =========== 00:14:47.722 Arbitration Burst: no limit 00:14:47.722 00:14:47.722 Power Management 00:14:47.722 ================ 00:14:47.722 Number of Power States: 1 00:14:47.722 Current Power State: Power State #0 00:14:47.722 Power State #0: 00:14:47.722 Max Power: 25.00 W 00:14:47.722 Non-Operational State: Operational 00:14:47.722 Entry Latency: 16 microseconds 00:14:47.722 Exit Latency: 4 microseconds 00:14:47.722 Relative Read Throughput: 0 00:14:47.722 Relative Read Latency: 0 00:14:47.722 Relative Write Throughput: 0 00:14:47.722 Relative Write Latency: 0 00:14:47.722 Idle Power: Not Reported 00:14:47.722 Active Power: Not Reported 00:14:47.722 Non-Operational Permissive Mode: Not Supported 00:14:47.722 00:14:47.722 Health Information 00:14:47.722 ================== 00:14:47.722 Critical Warnings: 00:14:47.722 Available Spare Space: OK 00:14:47.722 Temperature: OK 00:14:47.722 Device Reliability: OK 00:14:47.723 Read Only: No 00:14:47.723 Volatile Memory Backup: OK 00:14:47.723 Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.723 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:47.723 Available Spare: 0% 00:14:47.723 Available Spare Threshold: 0% 00:14:47.723 Life Percentage Used: 0% 00:14:47.723 Data Units Read: 649 00:14:47.723 Data Units Written: 577 00:14:47.723 Host Read Commands: 32150 00:14:47.723 Host Write Commands: 31936 00:14:47.723 Controller Busy Time: 0 minutes 00:14:47.723 Power Cycles: 0 00:14:47.723 Power On Hours: 0 hours 00:14:47.723 Unsafe Shutdowns: 0 00:14:47.723 Unrecoverable Media Errors: 0 00:14:47.723 Lifetime Error Log Entries: 0 00:14:47.723 Warning Temperature Time: 0 minutes 00:14:47.723 Critical Temperature Time: 0 minutes 00:14:47.723 00:14:47.723 Number of Queues 00:14:47.723 ================ 00:14:47.723 Number of I/O Submission Queues: 64 00:14:47.723 Number of I/O Completion Queues: 64 00:14:47.723 00:14:47.723 ZNS Specific Controller Data 00:14:47.723 ============================ 00:14:47.723 Zone Append Size Limit: 0 00:14:47.723 00:14:47.723 00:14:47.723 Active Namespaces 00:14:47.723 ================= 00:14:47.723 Namespace ID:1 00:14:47.723 Error Recovery Timeout: Unlimited 00:14:47.723 Command Set Identifier: NVM (00h) 00:14:47.723 Deallocate: Supported 00:14:47.723 Deallocated/Unwritten Error: Supported 00:14:47.723 Deallocated Read Value: All 0x00 00:14:47.723 Deallocate in Write Zeroes: Not Supported 00:14:47.723 Deallocated Guard Field: 0xFFFF 00:14:47.723 Flush: 
Supported 00:14:47.723 Reservation: Not Supported 00:14:47.723 Metadata Transferred as: Separate Metadata Buffer 00:14:47.723 Namespace Sharing Capabilities: Private 00:14:47.723 Size (in LBAs): 1548666 (5GiB) 00:14:47.723 Capacity (in LBAs): 1548666 (5GiB) 00:14:47.723 Utilization (in LBAs): 1548666 (5GiB) 00:14:47.723 Thin Provisioning: Not Supported 00:14:47.723 Per-NS Atomic Units: No 00:14:47.723 Maximum Single Source Range Length: 128 00:14:47.723 Maximum Copy Length: 128 00:14:47.723 Maximum Source Range Count: 128 00:14:47.723 NGUID/EUI64 Never Reused: No 00:14:47.723 Namespace Write Protected: No 00:14:47.723 Number of LBA Formats: 8 00:14:47.723 Current LBA Format: LBA Format #07 00:14:47.723 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.723 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:47.723 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:47.723 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:47.723 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:47.723 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:47.723 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:47.723 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:47.723 00:14:47.723 NVM Specific Namespace Data 00:14:47.723 =========================== 00:14:47.723 Logical Block Storage Tag Mask: 0 00:14:47.723 Protection Information Capabilities: 00:14:47.723 16b Guard Protection Information Storage Tag Support: No 00:14:47.723 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:47.723 Storage Tag Check Read Support: No 00:14:47.723 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:47.723 13:34:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:47.723 13:34:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:14:48.288 ===================================================== 00:14:48.288 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:48.288 ===================================================== 00:14:48.288 Controller Capabilities/Features 00:14:48.288 ================================ 00:14:48.288 Vendor ID: 1b36 00:14:48.288 Subsystem Vendor ID: 1af4 00:14:48.288 Serial Number: 12341 00:14:48.288 Model Number: QEMU NVMe Ctrl 00:14:48.288 Firmware Version: 8.0.0 00:14:48.288 Recommended Arb Burst: 6 00:14:48.288 IEEE OUI Identifier: 00 54 52 00:14:48.288 Multi-path I/O 00:14:48.288 May have multiple subsystem ports: No 00:14:48.288 May have multiple controllers: No 00:14:48.288 Associated with SR-IOV VF: No 00:14:48.288 Max Data Transfer Size: 524288 00:14:48.288 Max Number of Namespaces: 256 00:14:48.288 Max Number of I/O Queues: 64 00:14:48.288 NVMe 
Specification Version (VS): 1.4 00:14:48.288 NVMe Specification Version (Identify): 1.4 00:14:48.288 Maximum Queue Entries: 2048 00:14:48.288 Contiguous Queues Required: Yes 00:14:48.288 Arbitration Mechanisms Supported 00:14:48.288 Weighted Round Robin: Not Supported 00:14:48.288 Vendor Specific: Not Supported 00:14:48.288 Reset Timeout: 7500 ms 00:14:48.288 Doorbell Stride: 4 bytes 00:14:48.288 NVM Subsystem Reset: Not Supported 00:14:48.288 Command Sets Supported 00:14:48.288 NVM Command Set: Supported 00:14:48.288 Boot Partition: Not Supported 00:14:48.288 Memory Page Size Minimum: 4096 bytes 00:14:48.288 Memory Page Size Maximum: 65536 bytes 00:14:48.288 Persistent Memory Region: Not Supported 00:14:48.288 Optional Asynchronous Events Supported 00:14:48.288 Namespace Attribute Notices: Supported 00:14:48.288 Firmware Activation Notices: Not Supported 00:14:48.288 ANA Change Notices: Not Supported 00:14:48.288 PLE Aggregate Log Change Notices: Not Supported 00:14:48.288 LBA Status Info Alert Notices: Not Supported 00:14:48.288 EGE Aggregate Log Change Notices: Not Supported 00:14:48.288 Normal NVM Subsystem Shutdown event: Not Supported 00:14:48.288 Zone Descriptor Change Notices: Not Supported 00:14:48.288 Discovery Log Change Notices: Not Supported 00:14:48.288 Controller Attributes 00:14:48.288 128-bit Host Identifier: Not Supported 00:14:48.288 Non-Operational Permissive Mode: Not Supported 00:14:48.288 NVM Sets: Not Supported 00:14:48.288 Read Recovery Levels: Not Supported 00:14:48.289 Endurance Groups: Not Supported 00:14:48.289 Predictable Latency Mode: Not Supported 00:14:48.289 Traffic Based Keep ALive: Not Supported 00:14:48.289 Namespace Granularity: Not Supported 00:14:48.289 SQ Associations: Not Supported 00:14:48.289 UUID List: Not Supported 00:14:48.289 Multi-Domain Subsystem: Not Supported 00:14:48.289 Fixed Capacity Management: Not Supported 00:14:48.289 Variable Capacity Management: Not Supported 00:14:48.289 Delete Endurance Group: Not Supported 00:14:48.289 Delete NVM Set: Not Supported 00:14:48.289 Extended LBA Formats Supported: Supported 00:14:48.289 Flexible Data Placement Supported: Not Supported 00:14:48.289 00:14:48.289 Controller Memory Buffer Support 00:14:48.289 ================================ 00:14:48.289 Supported: No 00:14:48.289 00:14:48.289 Persistent Memory Region Support 00:14:48.289 ================================ 00:14:48.289 Supported: No 00:14:48.289 00:14:48.289 Admin Command Set Attributes 00:14:48.289 ============================ 00:14:48.289 Security Send/Receive: Not Supported 00:14:48.289 Format NVM: Supported 00:14:48.289 Firmware Activate/Download: Not Supported 00:14:48.289 Namespace Management: Supported 00:14:48.289 Device Self-Test: Not Supported 00:14:48.289 Directives: Supported 00:14:48.289 NVMe-MI: Not Supported 00:14:48.289 Virtualization Management: Not Supported 00:14:48.289 Doorbell Buffer Config: Supported 00:14:48.289 Get LBA Status Capability: Not Supported 00:14:48.289 Command & Feature Lockdown Capability: Not Supported 00:14:48.289 Abort Command Limit: 4 00:14:48.289 Async Event Request Limit: 4 00:14:48.289 Number of Firmware Slots: N/A 00:14:48.289 Firmware Slot 1 Read-Only: N/A 00:14:48.289 Firmware Activation Without Reset: N/A 00:14:48.289 Multiple Update Detection Support: N/A 00:14:48.289 Firmware Update Granularity: No Information Provided 00:14:48.289 Per-Namespace SMART Log: Yes 00:14:48.289 Asymmetric Namespace Access Log Page: Not Supported 00:14:48.289 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:14:48.289 Command Effects Log Page: Supported 00:14:48.289 Get Log Page Extended Data: Supported 00:14:48.289 Telemetry Log Pages: Not Supported 00:14:48.289 Persistent Event Log Pages: Not Supported 00:14:48.289 Supported Log Pages Log Page: May Support 00:14:48.289 Commands Supported & Effects Log Page: Not Supported 00:14:48.289 Feature Identifiers & Effects Log Page:May Support 00:14:48.289 NVMe-MI Commands & Effects Log Page: May Support 00:14:48.289 Data Area 4 for Telemetry Log: Not Supported 00:14:48.289 Error Log Page Entries Supported: 1 00:14:48.289 Keep Alive: Not Supported 00:14:48.289 00:14:48.289 NVM Command Set Attributes 00:14:48.289 ========================== 00:14:48.289 Submission Queue Entry Size 00:14:48.289 Max: 64 00:14:48.289 Min: 64 00:14:48.289 Completion Queue Entry Size 00:14:48.289 Max: 16 00:14:48.289 Min: 16 00:14:48.289 Number of Namespaces: 256 00:14:48.289 Compare Command: Supported 00:14:48.289 Write Uncorrectable Command: Not Supported 00:14:48.289 Dataset Management Command: Supported 00:14:48.289 Write Zeroes Command: Supported 00:14:48.289 Set Features Save Field: Supported 00:14:48.289 Reservations: Not Supported 00:14:48.289 Timestamp: Supported 00:14:48.289 Copy: Supported 00:14:48.289 Volatile Write Cache: Present 00:14:48.289 Atomic Write Unit (Normal): 1 00:14:48.289 Atomic Write Unit (PFail): 1 00:14:48.289 Atomic Compare & Write Unit: 1 00:14:48.289 Fused Compare & Write: Not Supported 00:14:48.289 Scatter-Gather List 00:14:48.289 SGL Command Set: Supported 00:14:48.289 SGL Keyed: Not Supported 00:14:48.289 SGL Bit Bucket Descriptor: Not Supported 00:14:48.289 SGL Metadata Pointer: Not Supported 00:14:48.289 Oversized SGL: Not Supported 00:14:48.289 SGL Metadata Address: Not Supported 00:14:48.289 SGL Offset: Not Supported 00:14:48.289 Transport SGL Data Block: Not Supported 00:14:48.289 Replay Protected Memory Block: Not Supported 00:14:48.289 00:14:48.289 Firmware Slot Information 00:14:48.289 ========================= 00:14:48.289 Active slot: 1 00:14:48.289 Slot 1 Firmware Revision: 1.0 00:14:48.289 00:14:48.289 00:14:48.289 Commands Supported and Effects 00:14:48.289 ============================== 00:14:48.289 Admin Commands 00:14:48.289 -------------- 00:14:48.289 Delete I/O Submission Queue (00h): Supported 00:14:48.289 Create I/O Submission Queue (01h): Supported 00:14:48.289 Get Log Page (02h): Supported 00:14:48.289 Delete I/O Completion Queue (04h): Supported 00:14:48.289 Create I/O Completion Queue (05h): Supported 00:14:48.289 Identify (06h): Supported 00:14:48.289 Abort (08h): Supported 00:14:48.289 Set Features (09h): Supported 00:14:48.289 Get Features (0Ah): Supported 00:14:48.289 Asynchronous Event Request (0Ch): Supported 00:14:48.289 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:48.289 Directive Send (19h): Supported 00:14:48.289 Directive Receive (1Ah): Supported 00:14:48.289 Virtualization Management (1Ch): Supported 00:14:48.289 Doorbell Buffer Config (7Ch): Supported 00:14:48.289 Format NVM (80h): Supported LBA-Change 00:14:48.289 I/O Commands 00:14:48.289 ------------ 00:14:48.289 Flush (00h): Supported LBA-Change 00:14:48.289 Write (01h): Supported LBA-Change 00:14:48.289 Read (02h): Supported 00:14:48.289 Compare (05h): Supported 00:14:48.289 Write Zeroes (08h): Supported LBA-Change 00:14:48.289 Dataset Management (09h): Supported LBA-Change 00:14:48.289 Unknown (0Ch): Supported 00:14:48.289 Unknown (12h): Supported 00:14:48.289 Copy (19h): Supported LBA-Change 00:14:48.289 Unknown (1Dh): 
Supported LBA-Change 00:14:48.289 00:14:48.289 Error Log 00:14:48.289 ========= 00:14:48.289 00:14:48.289 Arbitration 00:14:48.289 =========== 00:14:48.289 Arbitration Burst: no limit 00:14:48.289 00:14:48.289 Power Management 00:14:48.289 ================ 00:14:48.289 Number of Power States: 1 00:14:48.289 Current Power State: Power State #0 00:14:48.289 Power State #0: 00:14:48.289 Max Power: 25.00 W 00:14:48.289 Non-Operational State: Operational 00:14:48.289 Entry Latency: 16 microseconds 00:14:48.289 Exit Latency: 4 microseconds 00:14:48.289 Relative Read Throughput: 0 00:14:48.289 Relative Read Latency: 0 00:14:48.289 Relative Write Throughput: 0 00:14:48.289 Relative Write Latency: 0 00:14:48.289 Idle Power: Not Reported 00:14:48.289 Active Power: Not Reported 00:14:48.289 Non-Operational Permissive Mode: Not Supported 00:14:48.289 00:14:48.289 Health Information 00:14:48.289 ================== 00:14:48.289 Critical Warnings: 00:14:48.289 Available Spare Space: OK 00:14:48.289 Temperature: OK 00:14:48.289 Device Reliability: OK 00:14:48.289 Read Only: No 00:14:48.289 Volatile Memory Backup: OK 00:14:48.289 Current Temperature: 323 Kelvin (50 Celsius) 00:14:48.289 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:48.289 Available Spare: 0% 00:14:48.289 Available Spare Threshold: 0% 00:14:48.289 Life Percentage Used: 0% 00:14:48.289 Data Units Read: 998 00:14:48.289 Data Units Written: 865 00:14:48.289 Host Read Commands: 47881 00:14:48.289 Host Write Commands: 46665 00:14:48.289 Controller Busy Time: 0 minutes 00:14:48.289 Power Cycles: 0 00:14:48.289 Power On Hours: 0 hours 00:14:48.289 Unsafe Shutdowns: 0 00:14:48.289 Unrecoverable Media Errors: 0 00:14:48.289 Lifetime Error Log Entries: 0 00:14:48.289 Warning Temperature Time: 0 minutes 00:14:48.289 Critical Temperature Time: 0 minutes 00:14:48.289 00:14:48.289 Number of Queues 00:14:48.290 ================ 00:14:48.290 Number of I/O Submission Queues: 64 00:14:48.290 Number of I/O Completion Queues: 64 00:14:48.290 00:14:48.290 ZNS Specific Controller Data 00:14:48.290 ============================ 00:14:48.290 Zone Append Size Limit: 0 00:14:48.290 00:14:48.290 00:14:48.290 Active Namespaces 00:14:48.290 ================= 00:14:48.290 Namespace ID:1 00:14:48.290 Error Recovery Timeout: Unlimited 00:14:48.290 Command Set Identifier: NVM (00h) 00:14:48.290 Deallocate: Supported 00:14:48.290 Deallocated/Unwritten Error: Supported 00:14:48.290 Deallocated Read Value: All 0x00 00:14:48.290 Deallocate in Write Zeroes: Not Supported 00:14:48.290 Deallocated Guard Field: 0xFFFF 00:14:48.290 Flush: Supported 00:14:48.290 Reservation: Not Supported 00:14:48.290 Namespace Sharing Capabilities: Private 00:14:48.290 Size (in LBAs): 1310720 (5GiB) 00:14:48.290 Capacity (in LBAs): 1310720 (5GiB) 00:14:48.290 Utilization (in LBAs): 1310720 (5GiB) 00:14:48.290 Thin Provisioning: Not Supported 00:14:48.290 Per-NS Atomic Units: No 00:14:48.290 Maximum Single Source Range Length: 128 00:14:48.290 Maximum Copy Length: 128 00:14:48.290 Maximum Source Range Count: 128 00:14:48.290 NGUID/EUI64 Never Reused: No 00:14:48.290 Namespace Write Protected: No 00:14:48.290 Number of LBA Formats: 8 00:14:48.290 Current LBA Format: LBA Format #04 00:14:48.290 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:48.290 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:48.290 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:48.290 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:48.290 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:14:48.290 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:48.290 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:48.290 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:48.290 00:14:48.290 NVM Specific Namespace Data 00:14:48.290 =========================== 00:14:48.290 Logical Block Storage Tag Mask: 0 00:14:48.290 Protection Information Capabilities: 00:14:48.290 16b Guard Protection Information Storage Tag Support: No 00:14:48.290 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:48.290 Storage Tag Check Read Support: No 00:14:48.290 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.290 13:34:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:48.290 13:34:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:14:48.549 ===================================================== 00:14:48.549 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:48.549 ===================================================== 00:14:48.549 Controller Capabilities/Features 00:14:48.549 ================================ 00:14:48.549 Vendor ID: 1b36 00:14:48.549 Subsystem Vendor ID: 1af4 00:14:48.549 Serial Number: 12342 00:14:48.549 Model Number: QEMU NVMe Ctrl 00:14:48.549 Firmware Version: 8.0.0 00:14:48.549 Recommended Arb Burst: 6 00:14:48.549 IEEE OUI Identifier: 00 54 52 00:14:48.549 Multi-path I/O 00:14:48.549 May have multiple subsystem ports: No 00:14:48.549 May have multiple controllers: No 00:14:48.549 Associated with SR-IOV VF: No 00:14:48.549 Max Data Transfer Size: 524288 00:14:48.549 Max Number of Namespaces: 256 00:14:48.549 Max Number of I/O Queues: 64 00:14:48.549 NVMe Specification Version (VS): 1.4 00:14:48.549 NVMe Specification Version (Identify): 1.4 00:14:48.549 Maximum Queue Entries: 2048 00:14:48.549 Contiguous Queues Required: Yes 00:14:48.549 Arbitration Mechanisms Supported 00:14:48.549 Weighted Round Robin: Not Supported 00:14:48.549 Vendor Specific: Not Supported 00:14:48.549 Reset Timeout: 7500 ms 00:14:48.549 Doorbell Stride: 4 bytes 00:14:48.549 NVM Subsystem Reset: Not Supported 00:14:48.549 Command Sets Supported 00:14:48.549 NVM Command Set: Supported 00:14:48.549 Boot Partition: Not Supported 00:14:48.549 Memory Page Size Minimum: 4096 bytes 00:14:48.549 Memory Page Size Maximum: 65536 bytes 00:14:48.549 Persistent Memory Region: Not Supported 00:14:48.549 Optional Asynchronous Events Supported 00:14:48.549 Namespace Attribute Notices: Supported 00:14:48.549 Firmware Activation Notices: Not Supported 00:14:48.549 ANA Change Notices: Not Supported 00:14:48.549 PLE Aggregate Log Change Notices: Not Supported 00:14:48.549 LBA Status Info Alert Notices: 
Not Supported 00:14:48.549 EGE Aggregate Log Change Notices: Not Supported 00:14:48.549 Normal NVM Subsystem Shutdown event: Not Supported 00:14:48.549 Zone Descriptor Change Notices: Not Supported 00:14:48.549 Discovery Log Change Notices: Not Supported 00:14:48.549 Controller Attributes 00:14:48.549 128-bit Host Identifier: Not Supported 00:14:48.549 Non-Operational Permissive Mode: Not Supported 00:14:48.549 NVM Sets: Not Supported 00:14:48.549 Read Recovery Levels: Not Supported 00:14:48.549 Endurance Groups: Not Supported 00:14:48.549 Predictable Latency Mode: Not Supported 00:14:48.549 Traffic Based Keep ALive: Not Supported 00:14:48.549 Namespace Granularity: Not Supported 00:14:48.549 SQ Associations: Not Supported 00:14:48.549 UUID List: Not Supported 00:14:48.549 Multi-Domain Subsystem: Not Supported 00:14:48.549 Fixed Capacity Management: Not Supported 00:14:48.549 Variable Capacity Management: Not Supported 00:14:48.549 Delete Endurance Group: Not Supported 00:14:48.549 Delete NVM Set: Not Supported 00:14:48.549 Extended LBA Formats Supported: Supported 00:14:48.549 Flexible Data Placement Supported: Not Supported 00:14:48.549 00:14:48.549 Controller Memory Buffer Support 00:14:48.549 ================================ 00:14:48.549 Supported: No 00:14:48.549 00:14:48.549 Persistent Memory Region Support 00:14:48.549 ================================ 00:14:48.549 Supported: No 00:14:48.549 00:14:48.549 Admin Command Set Attributes 00:14:48.549 ============================ 00:14:48.549 Security Send/Receive: Not Supported 00:14:48.549 Format NVM: Supported 00:14:48.549 Firmware Activate/Download: Not Supported 00:14:48.549 Namespace Management: Supported 00:14:48.549 Device Self-Test: Not Supported 00:14:48.549 Directives: Supported 00:14:48.549 NVMe-MI: Not Supported 00:14:48.549 Virtualization Management: Not Supported 00:14:48.549 Doorbell Buffer Config: Supported 00:14:48.549 Get LBA Status Capability: Not Supported 00:14:48.549 Command & Feature Lockdown Capability: Not Supported 00:14:48.549 Abort Command Limit: 4 00:14:48.549 Async Event Request Limit: 4 00:14:48.549 Number of Firmware Slots: N/A 00:14:48.549 Firmware Slot 1 Read-Only: N/A 00:14:48.549 Firmware Activation Without Reset: N/A 00:14:48.549 Multiple Update Detection Support: N/A 00:14:48.549 Firmware Update Granularity: No Information Provided 00:14:48.549 Per-Namespace SMART Log: Yes 00:14:48.549 Asymmetric Namespace Access Log Page: Not Supported 00:14:48.549 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:48.549 Command Effects Log Page: Supported 00:14:48.549 Get Log Page Extended Data: Supported 00:14:48.549 Telemetry Log Pages: Not Supported 00:14:48.549 Persistent Event Log Pages: Not Supported 00:14:48.549 Supported Log Pages Log Page: May Support 00:14:48.549 Commands Supported & Effects Log Page: Not Supported 00:14:48.549 Feature Identifiers & Effects Log Page:May Support 00:14:48.549 NVMe-MI Commands & Effects Log Page: May Support 00:14:48.549 Data Area 4 for Telemetry Log: Not Supported 00:14:48.549 Error Log Page Entries Supported: 1 00:14:48.549 Keep Alive: Not Supported 00:14:48.549 00:14:48.549 NVM Command Set Attributes 00:14:48.549 ========================== 00:14:48.549 Submission Queue Entry Size 00:14:48.549 Max: 64 00:14:48.549 Min: 64 00:14:48.549 Completion Queue Entry Size 00:14:48.549 Max: 16 00:14:48.549 Min: 16 00:14:48.549 Number of Namespaces: 256 00:14:48.549 Compare Command: Supported 00:14:48.549 Write Uncorrectable Command: Not Supported 00:14:48.549 Dataset Management Command: 
Supported 00:14:48.549 Write Zeroes Command: Supported 00:14:48.549 Set Features Save Field: Supported 00:14:48.549 Reservations: Not Supported 00:14:48.549 Timestamp: Supported 00:14:48.549 Copy: Supported 00:14:48.549 Volatile Write Cache: Present 00:14:48.549 Atomic Write Unit (Normal): 1 00:14:48.549 Atomic Write Unit (PFail): 1 00:14:48.549 Atomic Compare & Write Unit: 1 00:14:48.549 Fused Compare & Write: Not Supported 00:14:48.549 Scatter-Gather List 00:14:48.549 SGL Command Set: Supported 00:14:48.549 SGL Keyed: Not Supported 00:14:48.549 SGL Bit Bucket Descriptor: Not Supported 00:14:48.549 SGL Metadata Pointer: Not Supported 00:14:48.550 Oversized SGL: Not Supported 00:14:48.550 SGL Metadata Address: Not Supported 00:14:48.550 SGL Offset: Not Supported 00:14:48.550 Transport SGL Data Block: Not Supported 00:14:48.550 Replay Protected Memory Block: Not Supported 00:14:48.550 00:14:48.550 Firmware Slot Information 00:14:48.550 ========================= 00:14:48.550 Active slot: 1 00:14:48.550 Slot 1 Firmware Revision: 1.0 00:14:48.550 00:14:48.550 00:14:48.550 Commands Supported and Effects 00:14:48.550 ============================== 00:14:48.550 Admin Commands 00:14:48.550 -------------- 00:14:48.550 Delete I/O Submission Queue (00h): Supported 00:14:48.550 Create I/O Submission Queue (01h): Supported 00:14:48.550 Get Log Page (02h): Supported 00:14:48.550 Delete I/O Completion Queue (04h): Supported 00:14:48.550 Create I/O Completion Queue (05h): Supported 00:14:48.550 Identify (06h): Supported 00:14:48.550 Abort (08h): Supported 00:14:48.550 Set Features (09h): Supported 00:14:48.550 Get Features (0Ah): Supported 00:14:48.550 Asynchronous Event Request (0Ch): Supported 00:14:48.550 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:48.550 Directive Send (19h): Supported 00:14:48.550 Directive Receive (1Ah): Supported 00:14:48.550 Virtualization Management (1Ch): Supported 00:14:48.550 Doorbell Buffer Config (7Ch): Supported 00:14:48.550 Format NVM (80h): Supported LBA-Change 00:14:48.550 I/O Commands 00:14:48.550 ------------ 00:14:48.550 Flush (00h): Supported LBA-Change 00:14:48.550 Write (01h): Supported LBA-Change 00:14:48.550 Read (02h): Supported 00:14:48.550 Compare (05h): Supported 00:14:48.550 Write Zeroes (08h): Supported LBA-Change 00:14:48.550 Dataset Management (09h): Supported LBA-Change 00:14:48.550 Unknown (0Ch): Supported 00:14:48.550 Unknown (12h): Supported 00:14:48.550 Copy (19h): Supported LBA-Change 00:14:48.550 Unknown (1Dh): Supported LBA-Change 00:14:48.550 00:14:48.550 Error Log 00:14:48.550 ========= 00:14:48.550 00:14:48.550 Arbitration 00:14:48.550 =========== 00:14:48.550 Arbitration Burst: no limit 00:14:48.550 00:14:48.550 Power Management 00:14:48.550 ================ 00:14:48.550 Number of Power States: 1 00:14:48.550 Current Power State: Power State #0 00:14:48.550 Power State #0: 00:14:48.550 Max Power: 25.00 W 00:14:48.550 Non-Operational State: Operational 00:14:48.550 Entry Latency: 16 microseconds 00:14:48.550 Exit Latency: 4 microseconds 00:14:48.550 Relative Read Throughput: 0 00:14:48.550 Relative Read Latency: 0 00:14:48.550 Relative Write Throughput: 0 00:14:48.550 Relative Write Latency: 0 00:14:48.550 Idle Power: Not Reported 00:14:48.550 Active Power: Not Reported 00:14:48.550 Non-Operational Permissive Mode: Not Supported 00:14:48.550 00:14:48.550 Health Information 00:14:48.550 ================== 00:14:48.550 Critical Warnings: 00:14:48.550 Available Spare Space: OK 00:14:48.550 Temperature: OK 00:14:48.550 Device 
Reliability: OK 00:14:48.550 Read Only: No 00:14:48.550 Volatile Memory Backup: OK 00:14:48.550 Current Temperature: 323 Kelvin (50 Celsius) 00:14:48.550 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:48.550 Available Spare: 0% 00:14:48.550 Available Spare Threshold: 0% 00:14:48.550 Life Percentage Used: 0% 00:14:48.550 Data Units Read: 2062 00:14:48.550 Data Units Written: 1850 00:14:48.550 Host Read Commands: 98072 00:14:48.550 Host Write Commands: 96344 00:14:48.550 Controller Busy Time: 0 minutes 00:14:48.550 Power Cycles: 0 00:14:48.550 Power On Hours: 0 hours 00:14:48.550 Unsafe Shutdowns: 0 00:14:48.550 Unrecoverable Media Errors: 0 00:14:48.550 Lifetime Error Log Entries: 0 00:14:48.550 Warning Temperature Time: 0 minutes 00:14:48.550 Critical Temperature Time: 0 minutes 00:14:48.550 00:14:48.550 Number of Queues 00:14:48.550 ================ 00:14:48.550 Number of I/O Submission Queues: 64 00:14:48.550 Number of I/O Completion Queues: 64 00:14:48.550 00:14:48.550 ZNS Specific Controller Data 00:14:48.550 ============================ 00:14:48.550 Zone Append Size Limit: 0 00:14:48.550 00:14:48.550 00:14:48.550 Active Namespaces 00:14:48.550 ================= 00:14:48.550 Namespace ID:1 00:14:48.550 Error Recovery Timeout: Unlimited 00:14:48.550 Command Set Identifier: NVM (00h) 00:14:48.550 Deallocate: Supported 00:14:48.550 Deallocated/Unwritten Error: Supported 00:14:48.550 Deallocated Read Value: All 0x00 00:14:48.550 Deallocate in Write Zeroes: Not Supported 00:14:48.550 Deallocated Guard Field: 0xFFFF 00:14:48.550 Flush: Supported 00:14:48.550 Reservation: Not Supported 00:14:48.550 Namespace Sharing Capabilities: Private 00:14:48.550 Size (in LBAs): 1048576 (4GiB) 00:14:48.550 Capacity (in LBAs): 1048576 (4GiB) 00:14:48.550 Utilization (in LBAs): 1048576 (4GiB) 00:14:48.550 Thin Provisioning: Not Supported 00:14:48.550 Per-NS Atomic Units: No 00:14:48.550 Maximum Single Source Range Length: 128 00:14:48.550 Maximum Copy Length: 128 00:14:48.550 Maximum Source Range Count: 128 00:14:48.550 NGUID/EUI64 Never Reused: No 00:14:48.550 Namespace Write Protected: No 00:14:48.550 Number of LBA Formats: 8 00:14:48.550 Current LBA Format: LBA Format #04 00:14:48.550 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:48.550 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:48.550 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:48.550 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:48.550 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:48.550 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:48.550 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:48.550 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:48.550 00:14:48.550 NVM Specific Namespace Data 00:14:48.550 =========================== 00:14:48.550 Logical Block Storage Tag Mask: 0 00:14:48.550 Protection Information Capabilities: 00:14:48.550 16b Guard Protection Information Storage Tag Support: No 00:14:48.550 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:48.550 Storage Tag Check Read Support: No 00:14:48.550 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.550 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Namespace ID:2 00:14:48.551 Error Recovery Timeout: Unlimited 00:14:48.551 Command Set Identifier: NVM (00h) 00:14:48.551 Deallocate: Supported 00:14:48.551 Deallocated/Unwritten Error: Supported 00:14:48.551 Deallocated Read Value: All 0x00 00:14:48.551 Deallocate in Write Zeroes: Not Supported 00:14:48.551 Deallocated Guard Field: 0xFFFF 00:14:48.551 Flush: Supported 00:14:48.551 Reservation: Not Supported 00:14:48.551 Namespace Sharing Capabilities: Private 00:14:48.551 Size (in LBAs): 1048576 (4GiB) 00:14:48.551 Capacity (in LBAs): 1048576 (4GiB) 00:14:48.551 Utilization (in LBAs): 1048576 (4GiB) 00:14:48.551 Thin Provisioning: Not Supported 00:14:48.551 Per-NS Atomic Units: No 00:14:48.551 Maximum Single Source Range Length: 128 00:14:48.551 Maximum Copy Length: 128 00:14:48.551 Maximum Source Range Count: 128 00:14:48.551 NGUID/EUI64 Never Reused: No 00:14:48.551 Namespace Write Protected: No 00:14:48.551 Number of LBA Formats: 8 00:14:48.551 Current LBA Format: LBA Format #04 00:14:48.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:48.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:48.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:48.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:48.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:48.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:48.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:48.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:48.551 00:14:48.551 NVM Specific Namespace Data 00:14:48.551 =========================== 00:14:48.551 Logical Block Storage Tag Mask: 0 00:14:48.551 Protection Information Capabilities: 00:14:48.551 16b Guard Protection Information Storage Tag Support: No 00:14:48.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:48.551 Storage Tag Check Read Support: No 00:14:48.551 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Namespace ID:3 00:14:48.551 Error Recovery Timeout: Unlimited 00:14:48.551 Command Set Identifier: NVM (00h) 00:14:48.551 Deallocate: Supported 00:14:48.551 Deallocated/Unwritten Error: Supported 00:14:48.551 Deallocated Read Value: All 0x00 00:14:48.551 Deallocate in Write Zeroes: Not Supported 00:14:48.551 Deallocated Guard Field: 0xFFFF 00:14:48.551 Flush: Supported 00:14:48.551 Reservation: Not Supported 00:14:48.551 
Namespace Sharing Capabilities: Private 00:14:48.551 Size (in LBAs): 1048576 (4GiB) 00:14:48.551 Capacity (in LBAs): 1048576 (4GiB) 00:14:48.551 Utilization (in LBAs): 1048576 (4GiB) 00:14:48.551 Thin Provisioning: Not Supported 00:14:48.551 Per-NS Atomic Units: No 00:14:48.551 Maximum Single Source Range Length: 128 00:14:48.551 Maximum Copy Length: 128 00:14:48.551 Maximum Source Range Count: 128 00:14:48.551 NGUID/EUI64 Never Reused: No 00:14:48.551 Namespace Write Protected: No 00:14:48.551 Number of LBA Formats: 8 00:14:48.551 Current LBA Format: LBA Format #04 00:14:48.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:48.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:48.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:48.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:48.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:48.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:48.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:48.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:48.551 00:14:48.551 NVM Specific Namespace Data 00:14:48.551 =========================== 00:14:48.551 Logical Block Storage Tag Mask: 0 00:14:48.551 Protection Information Capabilities: 00:14:48.551 16b Guard Protection Information Storage Tag Support: No 00:14:48.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:48.551 Storage Tag Check Read Support: No 00:14:48.551 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.551 13:34:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:48.551 13:34:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:14:48.810 ===================================================== 00:14:48.810 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:48.810 ===================================================== 00:14:48.810 Controller Capabilities/Features 00:14:48.810 ================================ 00:14:48.810 Vendor ID: 1b36 00:14:48.810 Subsystem Vendor ID: 1af4 00:14:48.810 Serial Number: 12343 00:14:48.810 Model Number: QEMU NVMe Ctrl 00:14:48.810 Firmware Version: 8.0.0 00:14:48.810 Recommended Arb Burst: 6 00:14:48.810 IEEE OUI Identifier: 00 54 52 00:14:48.810 Multi-path I/O 00:14:48.810 May have multiple subsystem ports: No 00:14:48.810 May have multiple controllers: Yes 00:14:48.810 Associated with SR-IOV VF: No 00:14:48.810 Max Data Transfer Size: 524288 00:14:48.810 Max Number of Namespaces: 256 00:14:48.810 Max Number of I/O Queues: 64 00:14:48.810 NVMe Specification Version (VS): 1.4 00:14:48.810 NVMe Specification Version (Identify): 1.4 00:14:48.810 Maximum Queue Entries: 2048 
00:14:48.810 Contiguous Queues Required: Yes 00:14:48.810 Arbitration Mechanisms Supported 00:14:48.810 Weighted Round Robin: Not Supported 00:14:48.810 Vendor Specific: Not Supported 00:14:48.810 Reset Timeout: 7500 ms 00:14:48.810 Doorbell Stride: 4 bytes 00:14:48.810 NVM Subsystem Reset: Not Supported 00:14:48.810 Command Sets Supported 00:14:48.810 NVM Command Set: Supported 00:14:48.810 Boot Partition: Not Supported 00:14:48.810 Memory Page Size Minimum: 4096 bytes 00:14:48.810 Memory Page Size Maximum: 65536 bytes 00:14:48.810 Persistent Memory Region: Not Supported 00:14:48.810 Optional Asynchronous Events Supported 00:14:48.810 Namespace Attribute Notices: Supported 00:14:48.810 Firmware Activation Notices: Not Supported 00:14:48.810 ANA Change Notices: Not Supported 00:14:48.810 PLE Aggregate Log Change Notices: Not Supported 00:14:48.810 LBA Status Info Alert Notices: Not Supported 00:14:48.810 EGE Aggregate Log Change Notices: Not Supported 00:14:48.810 Normal NVM Subsystem Shutdown event: Not Supported 00:14:48.810 Zone Descriptor Change Notices: Not Supported 00:14:48.810 Discovery Log Change Notices: Not Supported 00:14:48.810 Controller Attributes 00:14:48.810 128-bit Host Identifier: Not Supported 00:14:48.810 Non-Operational Permissive Mode: Not Supported 00:14:48.810 NVM Sets: Not Supported 00:14:48.810 Read Recovery Levels: Not Supported 00:14:48.810 Endurance Groups: Supported 00:14:48.810 Predictable Latency Mode: Not Supported 00:14:48.810 Traffic Based Keep Alive: Not Supported 00:14:48.810 Namespace Granularity: Not Supported 00:14:48.810 SQ Associations: Not Supported 00:14:48.810 UUID List: Not Supported 00:14:48.810 Multi-Domain Subsystem: Not Supported 00:14:48.810 Fixed Capacity Management: Not Supported 00:14:48.810 Variable Capacity Management: Not Supported 00:14:48.810 Delete Endurance Group: Not Supported 00:14:48.810 Delete NVM Set: Not Supported 00:14:48.810 Extended LBA Formats Supported: Supported 00:14:48.810 Flexible Data Placement Supported: Supported 00:14:48.810 00:14:48.810 Controller Memory Buffer Support 00:14:48.810 ================================ 00:14:48.810 Supported: No 00:14:48.810 00:14:48.810 Persistent Memory Region Support 00:14:48.810 ================================ 00:14:48.810 Supported: No 00:14:48.810 00:14:48.810 Admin Command Set Attributes 00:14:48.810 ============================ 00:14:48.810 Security Send/Receive: Not Supported 00:14:48.810 Format NVM: Supported 00:14:48.810 Firmware Activate/Download: Not Supported 00:14:48.810 Namespace Management: Supported 00:14:48.810 Device Self-Test: Not Supported 00:14:48.810 Directives: Supported 00:14:48.810 NVMe-MI: Not Supported 00:14:48.810 Virtualization Management: Not Supported 00:14:48.810 Doorbell Buffer Config: Supported 00:14:48.810 Get LBA Status Capability: Not Supported 00:14:48.810 Command & Feature Lockdown Capability: Not Supported 00:14:48.810 Abort Command Limit: 4 00:14:48.810 Async Event Request Limit: 4 00:14:48.810 Number of Firmware Slots: N/A 00:14:48.810 Firmware Slot 1 Read-Only: N/A 00:14:48.810 Firmware Activation Without Reset: N/A 00:14:48.810 Multiple Update Detection Support: N/A 00:14:48.810 Firmware Update Granularity: No Information Provided 00:14:48.810 Per-Namespace SMART Log: Yes 00:14:48.810 Asymmetric Namespace Access Log Page: Not Supported 00:14:48.810 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:48.810 Command Effects Log Page: Supported 00:14:48.810 Get Log Page Extended Data: Supported 00:14:48.810 Telemetry Log Pages: Not
Supported 00:14:48.810 Persistent Event Log Pages: Not Supported 00:14:48.810 Supported Log Pages Log Page: May Support 00:14:48.810 Commands Supported & Effects Log Page: Not Supported 00:14:48.810 Feature Identifiers & Effects Log Page: May Support 00:14:48.810 NVMe-MI Commands & Effects Log Page: May Support 00:14:48.810 Data Area 4 for Telemetry Log: Not Supported 00:14:48.810 Error Log Page Entries Supported: 1 00:14:48.810 Keep Alive: Not Supported 00:14:48.810 00:14:48.810 NVM Command Set Attributes 00:14:48.810 ========================== 00:14:48.810 Submission Queue Entry Size 00:14:48.810 Max: 64 00:14:48.810 Min: 64 00:14:48.810 Completion Queue Entry Size 00:14:48.810 Max: 16 00:14:48.810 Min: 16 00:14:48.810 Number of Namespaces: 256 00:14:48.810 Compare Command: Supported 00:14:48.810 Write Uncorrectable Command: Not Supported 00:14:48.810 Dataset Management Command: Supported 00:14:48.810 Write Zeroes Command: Supported 00:14:48.810 Set Features Save Field: Supported 00:14:48.810 Reservations: Not Supported 00:14:48.810 Timestamp: Supported 00:14:48.810 Copy: Supported 00:14:48.810 Volatile Write Cache: Present 00:14:48.810 Atomic Write Unit (Normal): 1 00:14:48.810 Atomic Write Unit (PFail): 1 00:14:48.810 Atomic Compare & Write Unit: 1 00:14:48.810 Fused Compare & Write: Not Supported 00:14:48.810 Scatter-Gather List 00:14:48.810 SGL Command Set: Supported 00:14:48.810 SGL Keyed: Not Supported 00:14:48.810 SGL Bit Bucket Descriptor: Not Supported 00:14:48.810 SGL Metadata Pointer: Not Supported 00:14:48.810 Oversized SGL: Not Supported 00:14:48.810 SGL Metadata Address: Not Supported 00:14:48.810 SGL Offset: Not Supported 00:14:48.810 Transport SGL Data Block: Not Supported 00:14:48.810 Replay Protected Memory Block: Not Supported 00:14:48.810 00:14:48.810 Firmware Slot Information 00:14:48.810 ========================= 00:14:48.810 Active slot: 1 00:14:48.810 Slot 1 Firmware Revision: 1.0 00:14:48.811 00:14:48.811 00:14:48.811 Commands Supported and Effects 00:14:48.811 ============================== 00:14:48.811 Admin Commands 00:14:48.811 -------------- 00:14:48.811 Delete I/O Submission Queue (00h): Supported 00:14:48.811 Create I/O Submission Queue (01h): Supported 00:14:48.811 Get Log Page (02h): Supported 00:14:48.811 Delete I/O Completion Queue (04h): Supported 00:14:48.811 Create I/O Completion Queue (05h): Supported 00:14:48.811 Identify (06h): Supported 00:14:48.811 Abort (08h): Supported 00:14:48.811 Set Features (09h): Supported 00:14:48.811 Get Features (0Ah): Supported 00:14:48.811 Asynchronous Event Request (0Ch): Supported 00:14:48.811 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:48.811 Directive Send (19h): Supported 00:14:48.811 Directive Receive (1Ah): Supported 00:14:48.811 Virtualization Management (1Ch): Supported 00:14:48.811 Doorbell Buffer Config (7Ch): Supported 00:14:48.811 Format NVM (80h): Supported LBA-Change 00:14:48.811 I/O Commands 00:14:48.811 ------------ 00:14:48.811 Flush (00h): Supported LBA-Change 00:14:48.811 Write (01h): Supported LBA-Change 00:14:48.811 Read (02h): Supported 00:14:48.811 Compare (05h): Supported 00:14:48.811 Write Zeroes (08h): Supported LBA-Change 00:14:48.811 Dataset Management (09h): Supported LBA-Change 00:14:48.811 Unknown (0Ch): Supported 00:14:48.811 Unknown (12h): Supported 00:14:48.811 Copy (19h): Supported LBA-Change 00:14:48.811 Unknown (1Dh): Supported LBA-Change 00:14:48.811 00:14:48.811 Error Log 00:14:48.811 ========= 00:14:48.811 00:14:48.811 Arbitration 00:14:48.811 ===========
00:14:48.811 Arbitration Burst: no limit 00:14:48.811 00:14:48.811 Power Management 00:14:48.811 ================ 00:14:48.811 Number of Power States: 1 00:14:48.811 Current Power State: Power State #0 00:14:48.811 Power State #0: 00:14:48.811 Max Power: 25.00 W 00:14:48.811 Non-Operational State: Operational 00:14:48.811 Entry Latency: 16 microseconds 00:14:48.811 Exit Latency: 4 microseconds 00:14:48.811 Relative Read Throughput: 0 00:14:48.811 Relative Read Latency: 0 00:14:48.811 Relative Write Throughput: 0 00:14:48.811 Relative Write Latency: 0 00:14:48.811 Idle Power: Not Reported 00:14:48.811 Active Power: Not Reported 00:14:48.811 Non-Operational Permissive Mode: Not Supported 00:14:48.811 00:14:48.811 Health Information 00:14:48.811 ================== 00:14:48.811 Critical Warnings: 00:14:48.811 Available Spare Space: OK 00:14:48.811 Temperature: OK 00:14:48.811 Device Reliability: OK 00:14:48.811 Read Only: No 00:14:48.811 Volatile Memory Backup: OK 00:14:48.811 Current Temperature: 323 Kelvin (50 Celsius) 00:14:48.811 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:48.811 Available Spare: 0% 00:14:48.811 Available Spare Threshold: 0% 00:14:48.811 Life Percentage Used: 0% 00:14:48.811 Data Units Read: 691 00:14:48.811 Data Units Written: 620 00:14:48.811 Host Read Commands: 32718 00:14:48.811 Host Write Commands: 32141 00:14:48.811 Controller Busy Time: 0 minutes 00:14:48.811 Power Cycles: 0 00:14:48.811 Power On Hours: 0 hours 00:14:48.811 Unsafe Shutdowns: 0 00:14:48.811 Unrecoverable Media Errors: 0 00:14:48.811 Lifetime Error Log Entries: 0 00:14:48.811 Warning Temperature Time: 0 minutes 00:14:48.811 Critical Temperature Time: 0 minutes 00:14:48.811 00:14:48.811 Number of Queues 00:14:48.811 ================ 00:14:48.811 Number of I/O Submission Queues: 64 00:14:48.811 Number of I/O Completion Queues: 64 00:14:48.811 00:14:48.811 ZNS Specific Controller Data 00:14:48.811 ============================ 00:14:48.811 Zone Append Size Limit: 0 00:14:48.811 00:14:48.811 00:14:48.811 Active Namespaces 00:14:48.811 ================= 00:14:48.811 Namespace ID:1 00:14:48.811 Error Recovery Timeout: Unlimited 00:14:48.811 Command Set Identifier: NVM (00h) 00:14:48.811 Deallocate: Supported 00:14:48.811 Deallocated/Unwritten Error: Supported 00:14:48.811 Deallocated Read Value: All 0x00 00:14:48.811 Deallocate in Write Zeroes: Not Supported 00:14:48.811 Deallocated Guard Field: 0xFFFF 00:14:48.811 Flush: Supported 00:14:48.811 Reservation: Not Supported 00:14:48.811 Namespace Sharing Capabilities: Multiple Controllers 00:14:48.811 Size (in LBAs): 262144 (1GiB) 00:14:48.811 Capacity (in LBAs): 262144 (1GiB) 00:14:48.811 Utilization (in LBAs): 262144 (1GiB) 00:14:48.811 Thin Provisioning: Not Supported 00:14:48.811 Per-NS Atomic Units: No 00:14:48.811 Maximum Single Source Range Length: 128 00:14:48.811 Maximum Copy Length: 128 00:14:48.811 Maximum Source Range Count: 128 00:14:48.811 NGUID/EUI64 Never Reused: No 00:14:48.811 Namespace Write Protected: No 00:14:48.811 Endurance group ID: 1 00:14:48.811 Number of LBA Formats: 8 00:14:48.811 Current LBA Format: LBA Format #04 00:14:48.811 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:48.811 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:48.811 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:48.811 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:48.811 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:48.811 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:48.811 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:14:48.811 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:48.811 00:14:48.811 Get Feature FDP: 00:14:48.811 ================ 00:14:48.811 Enabled: Yes 00:14:48.811 FDP configuration index: 0 00:14:48.811 00:14:48.811 FDP configurations log page 00:14:48.811 =========================== 00:14:48.811 Number of FDP configurations: 1 00:14:48.811 Version: 0 00:14:48.811 Size: 112 00:14:48.811 FDP Configuration Descriptor: 0 00:14:48.811 Descriptor Size: 96 00:14:48.811 Reclaim Group Identifier format: 2 00:14:48.811 FDP Volatile Write Cache: Not Present 00:14:48.811 FDP Configuration: Valid 00:14:48.811 Vendor Specific Size: 0 00:14:48.811 Number of Reclaim Groups: 2 00:14:48.811 Number of Reclaim Unit Handles: 8 00:14:48.811 Max Placement Identifiers: 128 00:14:48.811 Number of Namespaces Supported: 256 00:14:48.811 Reclaim Unit Nominal Size: 6000000 bytes 00:14:48.811 Estimated Reclaim Unit Time Limit: Not Reported 00:14:48.811 RUH Desc #000: RUH Type: Initially Isolated 00:14:48.811 RUH Desc #001: RUH Type: Initially Isolated 00:14:48.811 RUH Desc #002: RUH Type: Initially Isolated 00:14:48.811 RUH Desc #003: RUH Type: Initially Isolated 00:14:48.811 RUH Desc #004: RUH Type: Initially Isolated 00:14:48.811 RUH Desc #005: RUH Type: Initially Isolated 00:14:48.811 RUH Desc #006: RUH Type: Initially Isolated 00:14:48.811 RUH Desc #007: RUH Type: Initially Isolated 00:14:48.811 00:14:48.811 FDP reclaim unit handle usage log page 00:14:48.811 ====================================== 00:14:48.811 Number of Reclaim Unit Handles: 8 00:14:48.811 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:48.811 RUH Usage Desc #001: RUH Attributes: Unused 00:14:48.811 RUH Usage Desc #002: RUH Attributes: Unused 00:14:48.812 RUH Usage Desc #003: RUH Attributes: Unused 00:14:48.812 RUH Usage Desc #004: RUH Attributes: Unused 00:14:48.812 RUH Usage Desc #005: RUH Attributes: Unused 00:14:48.812 RUH Usage Desc #006: RUH Attributes: Unused 00:14:48.812 RUH Usage Desc #007: RUH Attributes: Unused 00:14:48.812 00:14:48.812 FDP statistics log page 00:14:48.812 ======================= 00:14:48.812 Host bytes with metadata written: 385101824 00:14:48.812 Media bytes with metadata written: 385171456 00:14:48.812 Media bytes erased: 0 00:14:48.812 00:14:48.812 FDP events log page 00:14:48.812 =================== 00:14:48.812 Number of FDP events: 0 00:14:48.812 00:14:48.812 NVM Specific Namespace Data 00:14:48.812 =========================== 00:14:48.812 Logical Block Storage Tag Mask: 0 00:14:48.812 Protection Information Capabilities: 00:14:48.812 16b Guard Protection Information Storage Tag Support: No 00:14:48.812 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:48.812 Storage Tag Check Read Support: No 00:14:48.812 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:48.812 00:14:48.812 real 0m2.024s 00:14:48.812 user 0m0.835s 00:14:48.812 sys 0m0.973s 00:14:48.812 13:34:40 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.812 13:34:40 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:14:48.812 ************************************ 00:14:48.812 END TEST nvme_identify 00:14:48.812 ************************************ 00:14:48.812 13:34:40 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:14:48.812 13:34:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.812 13:34:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.812 13:34:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.812 ************************************ 00:14:48.812 START TEST nvme_perf 00:14:48.812 ************************************ 00:14:48.812 13:34:40 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:14:48.812 13:34:40 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:14:50.184 Initializing NVMe Controllers 00:14:50.184 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.184 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:50.184 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:50.184 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:50.184 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:50.184 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:50.184 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:50.184 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:50.184 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:50.184 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:50.184 Initialization complete. Launching workers. 
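The summary table that follows can be sanity-checked against the spdk_nvme_perf arguments shown above: with a closed queue of -q 128 per namespace and -o 12288 byte reads, Little's law says IOPS x average latency should land near 128 in-flight I/Os, and throughput in MiB/s is simply IOPS x I/O size. A small sketch of both checks using the 0000:00:10.0 row from the table below (helper names are illustrative, not part of the SPDK tools):

    def implied_queue_depth(iops: float, avg_latency_us: float) -> float:
        # Little's law for a closed-loop load generator:
        # in-flight I/O = completion rate x time in system.
        return iops * avg_latency_us / 1_000_000

    def throughput_mib_s(iops: float, io_size_bytes: int) -> float:
        return iops * io_size_bytes / (1024 * 1024)

    # PCIE (0000:00:10.0) NSID 1 row below: 9721.98 IOPS, 13199.74 us average.
    print(implied_queue_depth(9721.98, 13199.74))   # ~128.3, consistent with -q 128
    print(throughput_mib_s(9721.98, 12288))         # ~113.93 MiB/s, matching the table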
00:14:50.184 ======================================================== 00:14:50.184 Latency(us) 00:14:50.184 Device Information : IOPS MiB/s Average min max 00:14:50.184 PCIE (0000:00:10.0) NSID 1 from core 0: 9721.98 113.93 13199.74 7470.27 40864.89 00:14:50.184 PCIE (0000:00:11.0) NSID 1 from core 0: 9721.98 113.93 13174.69 7589.25 38367.98 00:14:50.184 PCIE (0000:00:13.0) NSID 1 from core 0: 9721.98 113.93 13153.69 7535.24 37123.93 00:14:50.184 PCIE (0000:00:12.0) NSID 1 from core 0: 9721.98 113.93 13132.26 7569.93 35674.65 00:14:50.184 PCIE (0000:00:12.0) NSID 2 from core 0: 9721.98 113.93 13107.09 7574.88 34265.33 00:14:50.184 PCIE (0000:00:12.0) NSID 3 from core 0: 9721.98 113.93 13077.48 7537.68 31822.22 00:14:50.184 ======================================================== 00:14:50.184 Total : 58331.89 683.58 13140.83 7470.27 40864.89 00:14:50.184 00:14:50.184 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:50.184 ================================================================================= 00:14:50.184 1.00000% : 8043.055us 00:14:50.184 10.00000% : 8996.305us 00:14:50.184 25.00000% : 10247.447us 00:14:50.184 50.00000% : 11617.745us 00:14:50.184 75.00000% : 16681.891us 00:14:50.184 90.00000% : 18707.549us 00:14:50.184 95.00000% : 19660.800us 00:14:50.184 98.00000% : 20971.520us 00:14:50.184 99.00000% : 27882.589us 00:14:50.184 99.50000% : 38368.349us 00:14:50.184 99.90000% : 40513.164us 00:14:50.184 99.99000% : 40989.789us 00:14:50.184 99.99900% : 40989.789us 00:14:50.184 99.99990% : 40989.789us 00:14:50.184 99.99999% : 40989.789us 00:14:50.184 00:14:50.184 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:50.184 ================================================================================= 00:14:50.184 1.00000% : 8102.633us 00:14:50.184 10.00000% : 9055.884us 00:14:50.184 25.00000% : 10247.447us 00:14:50.184 50.00000% : 11677.324us 00:14:50.184 75.00000% : 16801.047us 00:14:50.184 90.00000% : 18707.549us 00:14:50.184 95.00000% : 19541.644us 00:14:50.184 98.00000% : 20733.207us 00:14:50.184 99.00000% : 26333.556us 00:14:50.184 99.50000% : 36461.847us 00:14:50.184 99.90000% : 38130.036us 00:14:50.184 99.99000% : 38368.349us 00:14:50.184 99.99900% : 38368.349us 00:14:50.184 99.99990% : 38368.349us 00:14:50.184 99.99999% : 38368.349us 00:14:50.184 00:14:50.184 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:50.184 ================================================================================= 00:14:50.184 1.00000% : 8102.633us 00:14:50.184 10.00000% : 9115.462us 00:14:50.184 25.00000% : 10247.447us 00:14:50.184 50.00000% : 11677.324us 00:14:50.184 75.00000% : 16801.047us 00:14:50.184 90.00000% : 18707.549us 00:14:50.184 95.00000% : 19422.487us 00:14:50.184 98.00000% : 20494.895us 00:14:50.184 99.00000% : 25261.149us 00:14:50.184 99.50000% : 35031.971us 00:14:50.184 99.90000% : 36938.473us 00:14:50.184 99.99000% : 37176.785us 00:14:50.184 99.99900% : 37176.785us 00:14:50.184 99.99990% : 37176.785us 00:14:50.184 99.99999% : 37176.785us 00:14:50.184 00:14:50.184 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:50.184 ================================================================================= 00:14:50.184 1.00000% : 8102.633us 00:14:50.184 10.00000% : 9055.884us 00:14:50.184 25.00000% : 10247.447us 00:14:50.184 50.00000% : 11677.324us 00:14:50.184 75.00000% : 16801.047us 00:14:50.184 90.00000% : 18707.549us 00:14:50.184 95.00000% : 19422.487us 00:14:50.184 98.00000% : 20494.895us 
00:14:50.184 99.00000% : 23712.116us 00:14:50.184 99.50000% : 33602.095us 00:14:50.184 99.90000% : 35508.596us 00:14:50.184 99.99000% : 35746.909us 00:14:50.184 99.99900% : 35746.909us 00:14:50.184 99.99990% : 35746.909us 00:14:50.184 99.99999% : 35746.909us 00:14:50.184 00:14:50.184 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:50.184 ================================================================================= 00:14:50.184 1.00000% : 8162.211us 00:14:50.184 10.00000% : 9055.884us 00:14:50.184 25.00000% : 10247.447us 00:14:50.184 50.00000% : 11736.902us 00:14:50.184 75.00000% : 16681.891us 00:14:50.185 90.00000% : 18707.549us 00:14:50.185 95.00000% : 19541.644us 00:14:50.185 98.00000% : 20733.207us 00:14:50.185 99.00000% : 22163.084us 00:14:50.185 99.50000% : 32172.218us 00:14:50.185 99.90000% : 34078.720us 00:14:50.185 99.99000% : 34317.033us 00:14:50.185 99.99900% : 34317.033us 00:14:50.185 99.99990% : 34317.033us 00:14:50.185 99.99999% : 34317.033us 00:14:50.185 00:14:50.185 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:50.185 ================================================================================= 00:14:50.185 1.00000% : 8102.633us 00:14:50.185 10.00000% : 8996.305us 00:14:50.185 25.00000% : 10247.447us 00:14:50.185 50.00000% : 11677.324us 00:14:50.185 75.00000% : 16681.891us 00:14:50.185 90.00000% : 18707.549us 00:14:50.185 95.00000% : 19541.644us 00:14:50.185 98.00000% : 20614.051us 00:14:50.185 99.00000% : 21209.833us 00:14:50.185 99.50000% : 29669.935us 00:14:50.185 99.90000% : 31457.280us 00:14:50.185 99.99000% : 31933.905us 00:14:50.185 99.99900% : 31933.905us 00:14:50.185 99.99990% : 31933.905us 00:14:50.185 99.99999% : 31933.905us 00:14:50.185 00:14:50.185 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:50.185 ============================================================================== 00:14:50.185 Range in us Cumulative IO count 00:14:50.185 7447.273 - 7477.062: 0.0103% ( 1) 00:14:50.185 7477.062 - 7506.851: 0.0206% ( 1) 00:14:50.185 7506.851 - 7536.640: 0.0514% ( 3) 00:14:50.185 7536.640 - 7566.429: 0.0925% ( 4) 00:14:50.185 7566.429 - 7596.218: 0.1131% ( 2) 00:14:50.185 7596.218 - 7626.007: 0.1336% ( 2) 00:14:50.185 7626.007 - 7685.585: 0.2056% ( 7) 00:14:50.185 7685.585 - 7745.164: 0.2775% ( 7) 00:14:50.185 7745.164 - 7804.742: 0.4009% ( 12) 00:14:50.185 7804.742 - 7864.320: 0.5243% ( 12) 00:14:50.185 7864.320 - 7923.898: 0.6887% ( 16) 00:14:50.185 7923.898 - 7983.476: 0.8532% ( 16) 00:14:50.185 7983.476 - 8043.055: 1.0382% ( 18) 00:14:50.185 8043.055 - 8102.633: 1.2952% ( 25) 00:14:50.185 8102.633 - 8162.211: 1.6345% ( 33) 00:14:50.185 8162.211 - 8221.789: 1.9840% ( 34) 00:14:50.185 8221.789 - 8281.367: 2.4363% ( 44) 00:14:50.185 8281.367 - 8340.945: 2.9400% ( 49) 00:14:50.185 8340.945 - 8400.524: 3.5670% ( 61) 00:14:50.185 8400.524 - 8460.102: 4.1427% ( 56) 00:14:50.185 8460.102 - 8519.680: 4.8314% ( 67) 00:14:50.185 8519.680 - 8579.258: 5.4585% ( 61) 00:14:50.185 8579.258 - 8638.836: 6.1575% ( 68) 00:14:50.185 8638.836 - 8698.415: 6.8462% ( 67) 00:14:50.185 8698.415 - 8757.993: 7.5041% ( 64) 00:14:50.185 8757.993 - 8817.571: 8.1826% ( 66) 00:14:50.185 8817.571 - 8877.149: 8.7479% ( 55) 00:14:50.185 8877.149 - 8936.727: 9.3853% ( 62) 00:14:50.185 8936.727 - 8996.305: 10.0021% ( 60) 00:14:50.185 8996.305 - 9055.884: 10.5880% ( 57) 00:14:50.185 9055.884 - 9115.462: 11.1534% ( 55) 00:14:50.185 9115.462 - 9175.040: 11.6674% ( 50) 00:14:50.185 9175.040 - 9234.618: 12.3150% ( 63) 00:14:50.185 
9234.618 - 9294.196: 12.8803% ( 55) 00:14:50.185 9294.196 - 9353.775: 13.4354% ( 54) 00:14:50.185 9353.775 - 9413.353: 14.0214% ( 57) 00:14:50.185 9413.353 - 9472.931: 14.5559% ( 52) 00:14:50.185 9472.931 - 9532.509: 15.1521% ( 58) 00:14:50.185 9532.509 - 9592.087: 15.5428% ( 38) 00:14:50.185 9592.087 - 9651.665: 16.0156% ( 46) 00:14:50.185 9651.665 - 9711.244: 16.6632% ( 63) 00:14:50.185 9711.244 - 9770.822: 17.4548% ( 77) 00:14:50.185 9770.822 - 9830.400: 18.2566% ( 78) 00:14:50.185 9830.400 - 9889.978: 19.1406% ( 86) 00:14:50.185 9889.978 - 9949.556: 20.1583% ( 99) 00:14:50.185 9949.556 - 10009.135: 21.1657% ( 98) 00:14:50.185 10009.135 - 10068.713: 22.3170% ( 112) 00:14:50.185 10068.713 - 10128.291: 23.5197% ( 117) 00:14:50.185 10128.291 - 10187.869: 24.7636% ( 121) 00:14:50.185 10187.869 - 10247.447: 25.9046% ( 111) 00:14:50.185 10247.447 - 10307.025: 27.1690% ( 123) 00:14:50.185 10307.025 - 10366.604: 28.3409% ( 114) 00:14:50.185 10366.604 - 10426.182: 29.6567% ( 128) 00:14:50.185 10426.182 - 10485.760: 30.8491% ( 116) 00:14:50.185 10485.760 - 10545.338: 32.0724% ( 119) 00:14:50.185 10545.338 - 10604.916: 33.4190% ( 131) 00:14:50.185 10604.916 - 10664.495: 34.6114% ( 116) 00:14:50.185 10664.495 - 10724.073: 35.8039% ( 116) 00:14:50.185 10724.073 - 10783.651: 37.1197% ( 128) 00:14:50.185 10783.651 - 10843.229: 38.4354% ( 128) 00:14:50.185 10843.229 - 10902.807: 39.6073% ( 114) 00:14:50.185 10902.807 - 10962.385: 40.7586% ( 112) 00:14:50.185 10962.385 - 11021.964: 41.8277% ( 104) 00:14:50.185 11021.964 - 11081.542: 42.7323% ( 88) 00:14:50.185 11081.542 - 11141.120: 43.5958% ( 84) 00:14:50.185 11141.120 - 11200.698: 44.5107% ( 89) 00:14:50.185 11200.698 - 11260.276: 45.4256% ( 89) 00:14:50.185 11260.276 - 11319.855: 46.2993% ( 85) 00:14:50.185 11319.855 - 11379.433: 47.0806% ( 76) 00:14:50.185 11379.433 - 11439.011: 47.8721% ( 77) 00:14:50.185 11439.011 - 11498.589: 48.6534% ( 76) 00:14:50.185 11498.589 - 11558.167: 49.3421% ( 67) 00:14:50.185 11558.167 - 11617.745: 50.0514% ( 69) 00:14:50.185 11617.745 - 11677.324: 50.7710% ( 70) 00:14:50.185 11677.324 - 11736.902: 51.4289% ( 64) 00:14:50.185 11736.902 - 11796.480: 52.0765% ( 63) 00:14:50.185 11796.480 - 11856.058: 52.7344% ( 64) 00:14:50.185 11856.058 - 11915.636: 53.3923% ( 64) 00:14:50.185 11915.636 - 11975.215: 54.0090% ( 60) 00:14:50.185 11975.215 - 12034.793: 54.6258% ( 60) 00:14:50.185 12034.793 - 12094.371: 55.2118% ( 57) 00:14:50.185 12094.371 - 12153.949: 55.7669% ( 54) 00:14:50.185 12153.949 - 12213.527: 56.3528% ( 57) 00:14:50.185 12213.527 - 12273.105: 56.9799% ( 61) 00:14:50.185 12273.105 - 12332.684: 57.4424% ( 45) 00:14:50.185 12332.684 - 12392.262: 57.9564% ( 50) 00:14:50.185 12392.262 - 12451.840: 58.5424% ( 57) 00:14:50.185 12451.840 - 12511.418: 58.9947% ( 44) 00:14:50.185 12511.418 - 12570.996: 59.5806% ( 57) 00:14:50.185 12570.996 - 12630.575: 60.0946% ( 50) 00:14:50.185 12630.575 - 12690.153: 60.6908% ( 58) 00:14:50.185 12690.153 - 12749.731: 61.2767% ( 57) 00:14:50.185 12749.731 - 12809.309: 61.9038% ( 61) 00:14:50.185 12809.309 - 12868.887: 62.4178% ( 50) 00:14:50.185 12868.887 - 12928.465: 63.0037% ( 57) 00:14:50.185 12928.465 - 12988.044: 63.4868% ( 47) 00:14:50.185 12988.044 - 13047.622: 63.9597% ( 46) 00:14:50.185 13047.622 - 13107.200: 64.2989% ( 33) 00:14:50.185 13107.200 - 13166.778: 64.5868% ( 28) 00:14:50.185 13166.778 - 13226.356: 64.7410% ( 15) 00:14:50.185 13226.356 - 13285.935: 64.9054% ( 16) 00:14:50.185 13285.935 - 13345.513: 65.1007% ( 19) 00:14:50.185 13345.513 - 13405.091: 65.2755% ( 17) 
00:14:50.185 13405.091 - 13464.669: 65.4605% ( 18) 00:14:50.185 13464.669 - 13524.247: 65.6147% ( 15) 00:14:50.185 13524.247 - 13583.825: 65.7895% ( 17) 00:14:50.185 13583.825 - 13643.404: 65.9437% ( 15) 00:14:50.185 13643.404 - 13702.982: 66.1081% ( 16) 00:14:50.185 13702.982 - 13762.560: 66.2829% ( 17) 00:14:50.185 13762.560 - 13822.138: 66.4679% ( 18) 00:14:50.185 13822.138 - 13881.716: 66.6016% ( 13) 00:14:50.185 13881.716 - 13941.295: 66.7558% ( 15) 00:14:50.185 13941.295 - 14000.873: 66.8791% ( 12) 00:14:50.185 14000.873 - 14060.451: 67.0025% ( 12) 00:14:50.185 14060.451 - 14120.029: 67.0950% ( 9) 00:14:50.185 14120.029 - 14179.607: 67.1875% ( 9) 00:14:50.185 14179.607 - 14239.185: 67.3006% ( 11) 00:14:50.185 14239.185 - 14298.764: 67.3931% ( 9) 00:14:50.185 14298.764 - 14358.342: 67.4959% ( 10) 00:14:50.185 14358.342 - 14417.920: 67.5473% ( 5) 00:14:50.185 14417.920 - 14477.498: 67.6604% ( 11) 00:14:50.185 14477.498 - 14537.076: 67.7426% ( 8) 00:14:50.185 14537.076 - 14596.655: 67.8248% ( 8) 00:14:50.185 14596.655 - 14656.233: 67.9379% ( 11) 00:14:50.185 14656.233 - 14715.811: 67.9996% ( 6) 00:14:50.185 14715.811 - 14775.389: 68.0818% ( 8) 00:14:50.185 14775.389 - 14834.967: 68.1743% ( 9) 00:14:50.185 14834.967 - 14894.545: 68.2566% ( 8) 00:14:50.185 14894.545 - 14954.124: 68.3799% ( 12) 00:14:50.185 14954.124 - 15013.702: 68.4725% ( 9) 00:14:50.185 15013.702 - 15073.280: 68.5752% ( 10) 00:14:50.185 15073.280 - 15132.858: 68.7192% ( 14) 00:14:50.185 15132.858 - 15192.436: 68.8117% ( 9) 00:14:50.185 15192.436 - 15252.015: 68.9453% ( 13) 00:14:50.185 15252.015 - 15371.171: 69.1201% ( 17) 00:14:50.185 15371.171 - 15490.327: 69.3154% ( 19) 00:14:50.185 15490.327 - 15609.484: 69.5826% ( 26) 00:14:50.185 15609.484 - 15728.640: 69.9322% ( 34) 00:14:50.185 15728.640 - 15847.796: 70.3022% ( 36) 00:14:50.185 15847.796 - 15966.953: 70.7648% ( 45) 00:14:50.185 15966.953 - 16086.109: 71.2685% ( 49) 00:14:50.185 16086.109 - 16205.265: 71.8339% ( 55) 00:14:50.185 16205.265 - 16324.422: 72.5432% ( 69) 00:14:50.185 16324.422 - 16443.578: 73.4169% ( 85) 00:14:50.185 16443.578 - 16562.735: 74.3010% ( 86) 00:14:50.185 16562.735 - 16681.891: 75.1336% ( 81) 00:14:50.185 16681.891 - 16801.047: 76.0074% ( 85) 00:14:50.185 16801.047 - 16920.204: 76.9120% ( 88) 00:14:50.185 16920.204 - 17039.360: 77.7138% ( 78) 00:14:50.185 17039.360 - 17158.516: 78.5876% ( 85) 00:14:50.185 17158.516 - 17277.673: 79.5641% ( 95) 00:14:50.185 17277.673 - 17396.829: 80.4688% ( 88) 00:14:50.185 17396.829 - 17515.985: 81.3836% ( 89) 00:14:50.185 17515.985 - 17635.142: 82.2780% ( 87) 00:14:50.185 17635.142 - 17754.298: 83.2442% ( 94) 00:14:50.185 17754.298 - 17873.455: 84.1488% ( 88) 00:14:50.185 17873.455 - 17992.611: 85.1151% ( 94) 00:14:50.185 17992.611 - 18111.767: 86.0197% ( 88) 00:14:50.185 18111.767 - 18230.924: 86.9038% ( 86) 00:14:50.185 18230.924 - 18350.080: 87.7467% ( 82) 00:14:50.185 18350.080 - 18469.236: 88.5280% ( 76) 00:14:50.185 18469.236 - 18588.393: 89.4017% ( 85) 00:14:50.186 18588.393 - 18707.549: 90.1419% ( 72) 00:14:50.186 18707.549 - 18826.705: 90.9128% ( 75) 00:14:50.186 18826.705 - 18945.862: 91.6324% ( 70) 00:14:50.186 18945.862 - 19065.018: 92.4239% ( 77) 00:14:50.186 19065.018 - 19184.175: 93.1229% ( 68) 00:14:50.186 19184.175 - 19303.331: 93.7808% ( 64) 00:14:50.186 19303.331 - 19422.487: 94.3771% ( 58) 00:14:50.186 19422.487 - 19541.644: 94.9424% ( 55) 00:14:50.186 19541.644 - 19660.800: 95.2919% ( 34) 00:14:50.186 19660.800 - 19779.956: 95.5489% ( 25) 00:14:50.186 19779.956 - 19899.113: 95.8162% ( 
26) 00:14:50.186 19899.113 - 20018.269: 96.1143% ( 29) 00:14:50.186 20018.269 - 20137.425: 96.4021% ( 28) 00:14:50.186 20137.425 - 20256.582: 96.6591% ( 25) 00:14:50.186 20256.582 - 20375.738: 96.8956% ( 23) 00:14:50.186 20375.738 - 20494.895: 97.1731% ( 27) 00:14:50.186 20494.895 - 20614.051: 97.4095% ( 23) 00:14:50.186 20614.051 - 20733.207: 97.6049% ( 19) 00:14:50.186 20733.207 - 20852.364: 97.8104% ( 20) 00:14:50.186 20852.364 - 20971.520: 98.0572% ( 24) 00:14:50.186 20971.520 - 21090.676: 98.2216% ( 16) 00:14:50.186 21090.676 - 21209.833: 98.3964% ( 17) 00:14:50.186 21209.833 - 21328.989: 98.5403% ( 14) 00:14:50.186 21328.989 - 21448.145: 98.6328% ( 9) 00:14:50.186 21448.145 - 21567.302: 98.6739% ( 4) 00:14:50.186 21567.302 - 21686.458: 98.6842% ( 1) 00:14:50.186 26452.713 - 26571.869: 98.6945% ( 1) 00:14:50.186 26571.869 - 26691.025: 98.7150% ( 2) 00:14:50.186 26691.025 - 26810.182: 98.7459% ( 3) 00:14:50.186 26810.182 - 26929.338: 98.7870% ( 4) 00:14:50.186 26929.338 - 27048.495: 98.8178% ( 3) 00:14:50.186 27048.495 - 27167.651: 98.8487% ( 3) 00:14:50.186 27167.651 - 27286.807: 98.8795% ( 3) 00:14:50.186 27286.807 - 27405.964: 98.9206% ( 4) 00:14:50.186 27405.964 - 27525.120: 98.9515% ( 3) 00:14:50.186 27525.120 - 27644.276: 98.9720% ( 2) 00:14:50.186 27644.276 - 27763.433: 98.9926% ( 2) 00:14:50.186 27763.433 - 27882.589: 99.0234% ( 3) 00:14:50.186 27882.589 - 28001.745: 99.0543% ( 3) 00:14:50.186 28001.745 - 28120.902: 99.0748% ( 2) 00:14:50.186 28120.902 - 28240.058: 99.1057% ( 3) 00:14:50.186 28240.058 - 28359.215: 99.1262% ( 2) 00:14:50.186 28359.215 - 28478.371: 99.1571% ( 3) 00:14:50.186 28478.371 - 28597.527: 99.1776% ( 2) 00:14:50.186 28597.527 - 28716.684: 99.1982% ( 2) 00:14:50.186 28716.684 - 28835.840: 99.2188% ( 2) 00:14:50.186 28835.840 - 28954.996: 99.2496% ( 3) 00:14:50.186 28954.996 - 29074.153: 99.2804% ( 3) 00:14:50.186 29074.153 - 29193.309: 99.3010% ( 2) 00:14:50.186 29193.309 - 29312.465: 99.3318% ( 3) 00:14:50.186 29312.465 - 29431.622: 99.3421% ( 1) 00:14:50.186 37176.785 - 37415.098: 99.3627% ( 2) 00:14:50.186 37415.098 - 37653.411: 99.4038% ( 4) 00:14:50.186 37653.411 - 37891.724: 99.4655% ( 6) 00:14:50.186 37891.724 - 38130.036: 99.4860% ( 2) 00:14:50.186 38130.036 - 38368.349: 99.5271% ( 4) 00:14:50.186 38368.349 - 38606.662: 99.5683% ( 4) 00:14:50.186 38606.662 - 38844.975: 99.6197% ( 5) 00:14:50.186 38844.975 - 39083.287: 99.6505% ( 3) 00:14:50.186 39083.287 - 39321.600: 99.6916% ( 4) 00:14:50.186 39321.600 - 39559.913: 99.7430% ( 5) 00:14:50.186 39559.913 - 39798.225: 99.7738% ( 3) 00:14:50.186 39798.225 - 40036.538: 99.8150% ( 4) 00:14:50.186 40036.538 - 40274.851: 99.8664% ( 5) 00:14:50.186 40274.851 - 40513.164: 99.9280% ( 6) 00:14:50.186 40513.164 - 40751.476: 99.9794% ( 5) 00:14:50.186 40751.476 - 40989.789: 100.0000% ( 2) 00:14:50.186 00:14:50.186 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:50.186 ============================================================================== 00:14:50.186 Range in us Cumulative IO count 00:14:50.186 7566.429 - 7596.218: 0.0103% ( 1) 00:14:50.186 7596.218 - 7626.007: 0.0308% ( 2) 00:14:50.186 7626.007 - 7685.585: 0.1028% ( 7) 00:14:50.186 7685.585 - 7745.164: 0.1850% ( 8) 00:14:50.186 7745.164 - 7804.742: 0.2673% ( 8) 00:14:50.186 7804.742 - 7864.320: 0.3701% ( 10) 00:14:50.186 7864.320 - 7923.898: 0.5140% ( 14) 00:14:50.186 7923.898 - 7983.476: 0.6887% ( 17) 00:14:50.186 7983.476 - 8043.055: 0.9046% ( 21) 00:14:50.186 8043.055 - 8102.633: 1.0794% ( 17) 00:14:50.186 8102.633 - 8162.211: 
1.2850% ( 20) 00:14:50.186 8162.211 - 8221.789: 1.5831% ( 29) 00:14:50.186 8221.789 - 8281.367: 1.9942% ( 40) 00:14:50.186 8281.367 - 8340.945: 2.4363% ( 43) 00:14:50.186 8340.945 - 8400.524: 2.9811% ( 53) 00:14:50.186 8400.524 - 8460.102: 3.5567% ( 56) 00:14:50.186 8460.102 - 8519.680: 4.2044% ( 63) 00:14:50.186 8519.680 - 8579.258: 4.9034% ( 68) 00:14:50.186 8579.258 - 8638.836: 5.6332% ( 71) 00:14:50.186 8638.836 - 8698.415: 6.3836% ( 73) 00:14:50.186 8698.415 - 8757.993: 7.1238% ( 72) 00:14:50.186 8757.993 - 8817.571: 7.8228% ( 68) 00:14:50.186 8817.571 - 8877.149: 8.5218% ( 68) 00:14:50.186 8877.149 - 8936.727: 9.1797% ( 64) 00:14:50.186 8936.727 - 8996.305: 9.8479% ( 65) 00:14:50.186 8996.305 - 9055.884: 10.4852% ( 62) 00:14:50.186 9055.884 - 9115.462: 11.1225% ( 62) 00:14:50.186 9115.462 - 9175.040: 11.7496% ( 61) 00:14:50.186 9175.040 - 9234.618: 12.3561% ( 59) 00:14:50.186 9234.618 - 9294.196: 12.9729% ( 60) 00:14:50.186 9294.196 - 9353.775: 13.5896% ( 60) 00:14:50.186 9353.775 - 9413.353: 14.1447% ( 54) 00:14:50.186 9413.353 - 9472.931: 14.6382% ( 48) 00:14:50.186 9472.931 - 9532.509: 15.0699% ( 42) 00:14:50.186 9532.509 - 9592.087: 15.5736% ( 49) 00:14:50.186 9592.087 - 9651.665: 16.0053% ( 42) 00:14:50.186 9651.665 - 9711.244: 16.3960% ( 38) 00:14:50.186 9711.244 - 9770.822: 16.8072% ( 40) 00:14:50.186 9770.822 - 9830.400: 17.3520% ( 53) 00:14:50.186 9830.400 - 9889.978: 18.1024% ( 73) 00:14:50.186 9889.978 - 9949.556: 19.0687% ( 94) 00:14:50.186 9949.556 - 10009.135: 20.1583% ( 106) 00:14:50.186 10009.135 - 10068.713: 21.3610% ( 117) 00:14:50.186 10068.713 - 10128.291: 22.5123% ( 112) 00:14:50.186 10128.291 - 10187.869: 23.8076% ( 126) 00:14:50.186 10187.869 - 10247.447: 25.2056% ( 136) 00:14:50.186 10247.447 - 10307.025: 26.6550% ( 141) 00:14:50.186 10307.025 - 10366.604: 28.0530% ( 136) 00:14:50.186 10366.604 - 10426.182: 29.5127% ( 142) 00:14:50.186 10426.182 - 10485.760: 30.9313% ( 138) 00:14:50.186 10485.760 - 10545.338: 32.3910% ( 142) 00:14:50.186 10545.338 - 10604.916: 33.8919% ( 146) 00:14:50.186 10604.916 - 10664.495: 35.4132% ( 148) 00:14:50.186 10664.495 - 10724.073: 36.8832% ( 143) 00:14:50.186 10724.073 - 10783.651: 38.2915% ( 137) 00:14:50.186 10783.651 - 10843.229: 39.5148% ( 119) 00:14:50.186 10843.229 - 10902.807: 40.7072% ( 116) 00:14:50.186 10902.807 - 10962.385: 41.6221% ( 89) 00:14:50.186 10962.385 - 11021.964: 42.3109% ( 67) 00:14:50.186 11021.964 - 11081.542: 42.9071% ( 58) 00:14:50.186 11081.542 - 11141.120: 43.4725% ( 55) 00:14:50.186 11141.120 - 11200.698: 44.1098% ( 62) 00:14:50.186 11200.698 - 11260.276: 44.8396% ( 71) 00:14:50.186 11260.276 - 11319.855: 45.6312% ( 77) 00:14:50.186 11319.855 - 11379.433: 46.5666% ( 91) 00:14:50.186 11379.433 - 11439.011: 47.4198% ( 83) 00:14:50.186 11439.011 - 11498.589: 48.2627% ( 82) 00:14:50.186 11498.589 - 11558.167: 49.0954% ( 81) 00:14:50.186 11558.167 - 11617.745: 49.8972% ( 78) 00:14:50.186 11617.745 - 11677.324: 50.6785% ( 76) 00:14:50.186 11677.324 - 11736.902: 51.4186% ( 72) 00:14:50.186 11736.902 - 11796.480: 52.1279% ( 69) 00:14:50.186 11796.480 - 11856.058: 52.8166% ( 67) 00:14:50.186 11856.058 - 11915.636: 53.4642% ( 63) 00:14:50.186 11915.636 - 11975.215: 54.1221% ( 64) 00:14:50.186 11975.215 - 12034.793: 54.8211% ( 68) 00:14:50.186 12034.793 - 12094.371: 55.4276% ( 59) 00:14:50.186 12094.371 - 12153.949: 56.1061% ( 66) 00:14:50.186 12153.949 - 12213.527: 56.7743% ( 65) 00:14:50.186 12213.527 - 12273.105: 57.4116% ( 62) 00:14:50.186 12273.105 - 12332.684: 58.0387% ( 61) 00:14:50.186 12332.684 - 
12392.262: 58.6451% ( 59) 00:14:50.186 12392.262 - 12451.840: 59.2414% ( 58) 00:14:50.186 12451.840 - 12511.418: 59.8273% ( 57) 00:14:50.186 12511.418 - 12570.996: 60.4132% ( 57) 00:14:50.186 12570.996 - 12630.575: 61.0403% ( 61) 00:14:50.186 12630.575 - 12690.153: 61.6160% ( 56) 00:14:50.186 12690.153 - 12749.731: 62.2122% ( 58) 00:14:50.186 12749.731 - 12809.309: 62.8392% ( 61) 00:14:50.186 12809.309 - 12868.887: 63.3635% ( 51) 00:14:50.186 12868.887 - 12928.465: 63.7850% ( 41) 00:14:50.186 12928.465 - 12988.044: 64.0831% ( 29) 00:14:50.186 12988.044 - 13047.622: 64.2887% ( 20) 00:14:50.186 13047.622 - 13107.200: 64.4428% ( 15) 00:14:50.186 13107.200 - 13166.778: 64.5662% ( 12) 00:14:50.186 13166.778 - 13226.356: 64.7410% ( 17) 00:14:50.186 13226.356 - 13285.935: 64.8951% ( 15) 00:14:50.186 13285.935 - 13345.513: 65.0699% ( 17) 00:14:50.186 13345.513 - 13405.091: 65.2035% ( 13) 00:14:50.186 13405.091 - 13464.669: 65.3372% ( 13) 00:14:50.186 13464.669 - 13524.247: 65.4914% ( 15) 00:14:50.186 13524.247 - 13583.825: 65.6250% ( 13) 00:14:50.186 13583.825 - 13643.404: 65.7689% ( 14) 00:14:50.186 13643.404 - 13702.982: 65.9025% ( 13) 00:14:50.186 13702.982 - 13762.560: 66.0465% ( 14) 00:14:50.186 13762.560 - 13822.138: 66.1801% ( 13) 00:14:50.186 13822.138 - 13881.716: 66.3035% ( 12) 00:14:50.186 13881.716 - 13941.295: 66.4268% ( 12) 00:14:50.186 13941.295 - 14000.873: 66.5604% ( 13) 00:14:50.186 14000.873 - 14060.451: 66.6632% ( 10) 00:14:50.186 14060.451 - 14120.029: 66.7763% ( 11) 00:14:50.186 14120.029 - 14179.607: 66.8894% ( 11) 00:14:50.186 14179.607 - 14239.185: 67.0127% ( 12) 00:14:50.186 14239.185 - 14298.764: 67.1258% ( 11) 00:14:50.186 14298.764 - 14358.342: 67.2183% ( 9) 00:14:50.186 14358.342 - 14417.920: 67.3520% ( 13) 00:14:50.187 14417.920 - 14477.498: 67.5062% ( 15) 00:14:50.187 14477.498 - 14537.076: 67.6295% ( 12) 00:14:50.187 14537.076 - 14596.655: 67.8043% ( 17) 00:14:50.187 14596.655 - 14656.233: 67.8968% ( 9) 00:14:50.187 14656.233 - 14715.811: 68.0099% ( 11) 00:14:50.187 14715.811 - 14775.389: 68.1127% ( 10) 00:14:50.187 14775.389 - 14834.967: 68.1949% ( 8) 00:14:50.187 14834.967 - 14894.545: 68.2977% ( 10) 00:14:50.187 14894.545 - 14954.124: 68.4108% ( 11) 00:14:50.187 14954.124 - 15013.702: 68.5238% ( 11) 00:14:50.187 15013.702 - 15073.280: 68.6575% ( 13) 00:14:50.187 15073.280 - 15132.858: 68.7706% ( 11) 00:14:50.187 15132.858 - 15192.436: 68.8939% ( 12) 00:14:50.187 15192.436 - 15252.015: 69.0173% ( 12) 00:14:50.187 15252.015 - 15371.171: 69.2229% ( 20) 00:14:50.187 15371.171 - 15490.327: 69.4593% ( 23) 00:14:50.187 15490.327 - 15609.484: 69.7471% ( 28) 00:14:50.187 15609.484 - 15728.640: 70.0247% ( 27) 00:14:50.187 15728.640 - 15847.796: 70.3433% ( 31) 00:14:50.187 15847.796 - 15966.953: 70.6826% ( 33) 00:14:50.187 15966.953 - 16086.109: 71.0629% ( 37) 00:14:50.187 16086.109 - 16205.265: 71.4741% ( 40) 00:14:50.187 16205.265 - 16324.422: 71.9470% ( 46) 00:14:50.187 16324.422 - 16443.578: 72.4712% ( 51) 00:14:50.187 16443.578 - 16562.735: 73.2730% ( 78) 00:14:50.187 16562.735 - 16681.891: 74.1365% ( 84) 00:14:50.187 16681.891 - 16801.047: 75.1439% ( 98) 00:14:50.187 16801.047 - 16920.204: 76.1822% ( 101) 00:14:50.187 16920.204 - 17039.360: 77.1382% ( 93) 00:14:50.187 17039.360 - 17158.516: 78.0633% ( 90) 00:14:50.187 17158.516 - 17277.673: 78.9988% ( 91) 00:14:50.187 17277.673 - 17396.829: 79.9548% ( 93) 00:14:50.187 17396.829 - 17515.985: 80.9005% ( 92) 00:14:50.187 17515.985 - 17635.142: 81.9696% ( 104) 00:14:50.187 17635.142 - 17754.298: 82.9461% ( 95) 
00:14:50.187 17754.298 - 17873.455: 83.9330% ( 96) 00:14:50.187 17873.455 - 17992.611: 84.8273% ( 87) 00:14:50.187 17992.611 - 18111.767: 85.7525% ( 90) 00:14:50.187 18111.767 - 18230.924: 86.7701% ( 99) 00:14:50.187 18230.924 - 18350.080: 87.8392% ( 104) 00:14:50.187 18350.080 - 18469.236: 88.8363% ( 97) 00:14:50.187 18469.236 - 18588.393: 89.8335% ( 97) 00:14:50.187 18588.393 - 18707.549: 90.7998% ( 94) 00:14:50.187 18707.549 - 18826.705: 91.6838% ( 86) 00:14:50.187 18826.705 - 18945.862: 92.5370% ( 83) 00:14:50.187 18945.862 - 19065.018: 93.2977% ( 74) 00:14:50.187 19065.018 - 19184.175: 94.0173% ( 70) 00:14:50.187 19184.175 - 19303.331: 94.6032% ( 57) 00:14:50.187 19303.331 - 19422.487: 94.9322% ( 32) 00:14:50.187 19422.487 - 19541.644: 95.2714% ( 33) 00:14:50.187 19541.644 - 19660.800: 95.5695% ( 29) 00:14:50.187 19660.800 - 19779.956: 95.8779% ( 30) 00:14:50.187 19779.956 - 19899.113: 96.1863% ( 30) 00:14:50.187 19899.113 - 20018.269: 96.4638% ( 27) 00:14:50.187 20018.269 - 20137.425: 96.7414% ( 27) 00:14:50.187 20137.425 - 20256.582: 97.0086% ( 26) 00:14:50.187 20256.582 - 20375.738: 97.2862% ( 27) 00:14:50.187 20375.738 - 20494.895: 97.5535% ( 26) 00:14:50.187 20494.895 - 20614.051: 97.8002% ( 24) 00:14:50.187 20614.051 - 20733.207: 98.0263% ( 22) 00:14:50.187 20733.207 - 20852.364: 98.2319% ( 20) 00:14:50.187 20852.364 - 20971.520: 98.3861% ( 15) 00:14:50.187 20971.520 - 21090.676: 98.5506% ( 16) 00:14:50.187 21090.676 - 21209.833: 98.6328% ( 8) 00:14:50.187 21209.833 - 21328.989: 98.6637% ( 3) 00:14:50.187 21328.989 - 21448.145: 98.6842% ( 2) 00:14:50.187 25141.993 - 25261.149: 98.7048% ( 2) 00:14:50.187 25261.149 - 25380.305: 98.7356% ( 3) 00:14:50.187 25380.305 - 25499.462: 98.7664% ( 3) 00:14:50.187 25499.462 - 25618.618: 98.8076% ( 4) 00:14:50.187 25618.618 - 25737.775: 98.8281% ( 2) 00:14:50.187 25737.775 - 25856.931: 98.8692% ( 4) 00:14:50.187 25856.931 - 25976.087: 98.9104% ( 4) 00:14:50.187 25976.087 - 26095.244: 98.9412% ( 3) 00:14:50.187 26095.244 - 26214.400: 98.9720% ( 3) 00:14:50.187 26214.400 - 26333.556: 99.0132% ( 4) 00:14:50.187 26333.556 - 26452.713: 99.0440% ( 3) 00:14:50.187 26452.713 - 26571.869: 99.0851% ( 4) 00:14:50.187 26571.869 - 26691.025: 99.1160% ( 3) 00:14:50.187 26691.025 - 26810.182: 99.1468% ( 3) 00:14:50.187 26810.182 - 26929.338: 99.1879% ( 4) 00:14:50.187 26929.338 - 27048.495: 99.2290% ( 4) 00:14:50.187 27048.495 - 27167.651: 99.2701% ( 4) 00:14:50.187 27167.651 - 27286.807: 99.3010% ( 3) 00:14:50.187 27286.807 - 27405.964: 99.3318% ( 3) 00:14:50.187 27405.964 - 27525.120: 99.3421% ( 1) 00:14:50.187 35508.596 - 35746.909: 99.3832% ( 4) 00:14:50.187 35746.909 - 35985.222: 99.4449% ( 6) 00:14:50.187 35985.222 - 36223.535: 99.4963% ( 5) 00:14:50.187 36223.535 - 36461.847: 99.5477% ( 5) 00:14:50.187 36461.847 - 36700.160: 99.6094% ( 6) 00:14:50.187 36700.160 - 36938.473: 99.6608% ( 5) 00:14:50.187 36938.473 - 37176.785: 99.7225% ( 6) 00:14:50.187 37176.785 - 37415.098: 99.7738% ( 5) 00:14:50.187 37415.098 - 37653.411: 99.8252% ( 5) 00:14:50.187 37653.411 - 37891.724: 99.8766% ( 5) 00:14:50.187 37891.724 - 38130.036: 99.9383% ( 6) 00:14:50.187 38130.036 - 38368.349: 100.0000% ( 6) 00:14:50.187 00:14:50.187 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:50.187 ============================================================================== 00:14:50.187 Range in us Cumulative IO count 00:14:50.187 7506.851 - 7536.640: 0.0103% ( 1) 00:14:50.187 7536.640 - 7566.429: 0.0206% ( 1) 00:14:50.187 7566.429 - 7596.218: 0.0308% ( 1) 
00:14:50.187 7596.218 - 7626.007: 0.0514% ( 2) 00:14:50.187 7626.007 - 7685.585: 0.1028% ( 5) 00:14:50.187 7685.585 - 7745.164: 0.1542% ( 5) 00:14:50.187 7745.164 - 7804.742: 0.2570% ( 10) 00:14:50.187 7804.742 - 7864.320: 0.4112% ( 15) 00:14:50.187 7864.320 - 7923.898: 0.6065% ( 19) 00:14:50.187 7923.898 - 7983.476: 0.7812% ( 17) 00:14:50.187 7983.476 - 8043.055: 0.9663% ( 18) 00:14:50.187 8043.055 - 8102.633: 1.1616% ( 19) 00:14:50.187 8102.633 - 8162.211: 1.3672% ( 20) 00:14:50.187 8162.211 - 8221.789: 1.5831% ( 21) 00:14:50.187 8221.789 - 8281.367: 1.9223% ( 33) 00:14:50.187 8281.367 - 8340.945: 2.3026% ( 37) 00:14:50.187 8340.945 - 8400.524: 2.7961% ( 48) 00:14:50.187 8400.524 - 8460.102: 3.3923% ( 58) 00:14:50.187 8460.102 - 8519.680: 4.0193% ( 61) 00:14:50.187 8519.680 - 8579.258: 4.6978% ( 66) 00:14:50.187 8579.258 - 8638.836: 5.3762% ( 66) 00:14:50.187 8638.836 - 8698.415: 6.0650% ( 67) 00:14:50.187 8698.415 - 8757.993: 6.7948% ( 71) 00:14:50.187 8757.993 - 8817.571: 7.4424% ( 63) 00:14:50.187 8817.571 - 8877.149: 8.0592% ( 60) 00:14:50.187 8877.149 - 8936.727: 8.7171% ( 64) 00:14:50.187 8936.727 - 8996.305: 9.3544% ( 62) 00:14:50.187 8996.305 - 9055.884: 9.9815% ( 61) 00:14:50.187 9055.884 - 9115.462: 10.6086% ( 61) 00:14:50.187 9115.462 - 9175.040: 11.3076% ( 68) 00:14:50.187 9175.040 - 9234.618: 12.0066% ( 68) 00:14:50.187 9234.618 - 9294.196: 12.7159% ( 69) 00:14:50.187 9294.196 - 9353.775: 13.4252% ( 69) 00:14:50.187 9353.775 - 9413.353: 14.1139% ( 67) 00:14:50.187 9413.353 - 9472.931: 14.6896% ( 56) 00:14:50.187 9472.931 - 9532.509: 15.2035% ( 50) 00:14:50.187 9532.509 - 9592.087: 15.6970% ( 48) 00:14:50.187 9592.087 - 9651.665: 16.1081% ( 40) 00:14:50.187 9651.665 - 9711.244: 16.6221% ( 50) 00:14:50.187 9711.244 - 9770.822: 17.1669% ( 53) 00:14:50.187 9770.822 - 9830.400: 17.8557% ( 67) 00:14:50.187 9830.400 - 9889.978: 18.6164% ( 74) 00:14:50.187 9889.978 - 9949.556: 19.5826% ( 94) 00:14:50.187 9949.556 - 10009.135: 20.6826% ( 107) 00:14:50.187 10009.135 - 10068.713: 21.7928% ( 108) 00:14:50.187 10068.713 - 10128.291: 22.9544% ( 113) 00:14:50.187 10128.291 - 10187.869: 24.2188% ( 123) 00:14:50.187 10187.869 - 10247.447: 25.5448% ( 129) 00:14:50.187 10247.447 - 10307.025: 26.9017% ( 132) 00:14:50.187 10307.025 - 10366.604: 28.2484% ( 131) 00:14:50.187 10366.604 - 10426.182: 29.6053% ( 132) 00:14:50.187 10426.182 - 10485.760: 31.0136% ( 137) 00:14:50.187 10485.760 - 10545.338: 32.3910% ( 134) 00:14:50.187 10545.338 - 10604.916: 33.8610% ( 143) 00:14:50.187 10604.916 - 10664.495: 35.2899% ( 139) 00:14:50.187 10664.495 - 10724.073: 36.6879% ( 136) 00:14:50.187 10724.073 - 10783.651: 38.0859% ( 136) 00:14:50.187 10783.651 - 10843.229: 39.3914% ( 127) 00:14:50.187 10843.229 - 10902.807: 40.4811% ( 106) 00:14:50.187 10902.807 - 10962.385: 41.3754% ( 87) 00:14:50.187 10962.385 - 11021.964: 41.9922% ( 60) 00:14:50.187 11021.964 - 11081.542: 42.6192% ( 61) 00:14:50.187 11081.542 - 11141.120: 43.2052% ( 57) 00:14:50.187 11141.120 - 11200.698: 43.8322% ( 61) 00:14:50.187 11200.698 - 11260.276: 44.5518% ( 70) 00:14:50.187 11260.276 - 11319.855: 45.3947% ( 82) 00:14:50.187 11319.855 - 11379.433: 46.2479% ( 83) 00:14:50.187 11379.433 - 11439.011: 47.0806% ( 81) 00:14:50.187 11439.011 - 11498.589: 47.9852% ( 88) 00:14:50.187 11498.589 - 11558.167: 48.7973% ( 79) 00:14:50.187 11558.167 - 11617.745: 49.5785% ( 76) 00:14:50.187 11617.745 - 11677.324: 50.3289% ( 73) 00:14:50.187 11677.324 - 11736.902: 51.0691% ( 72) 00:14:50.187 11736.902 - 11796.480: 51.7784% ( 69) 00:14:50.187 11796.480 - 
11856.058: 52.4465% ( 65) 00:14:50.187 11856.058 - 11915.636: 53.1661% ( 70) 00:14:50.188 11915.636 - 11975.215: 53.8549% ( 67) 00:14:50.188 11975.215 - 12034.793: 54.5436% ( 67) 00:14:50.188 12034.793 - 12094.371: 55.2118% ( 65) 00:14:50.188 12094.371 - 12153.949: 55.9416% ( 71) 00:14:50.188 12153.949 - 12213.527: 56.5687% ( 61) 00:14:50.188 12213.527 - 12273.105: 57.1957% ( 61) 00:14:50.188 12273.105 - 12332.684: 57.8639% ( 65) 00:14:50.188 12332.684 - 12392.262: 58.4704% ( 59) 00:14:50.188 12392.262 - 12451.840: 59.1591% ( 67) 00:14:50.188 12451.840 - 12511.418: 59.7965% ( 62) 00:14:50.188 12511.418 - 12570.996: 60.4338% ( 62) 00:14:50.188 12570.996 - 12630.575: 61.0609% ( 61) 00:14:50.188 12630.575 - 12690.153: 61.7393% ( 66) 00:14:50.188 12690.153 - 12749.731: 62.3972% ( 64) 00:14:50.188 12749.731 - 12809.309: 62.9934% ( 58) 00:14:50.188 12809.309 - 12868.887: 63.5588% ( 55) 00:14:50.188 12868.887 - 12928.465: 64.0728% ( 50) 00:14:50.188 12928.465 - 12988.044: 64.3400% ( 26) 00:14:50.188 12988.044 - 13047.622: 64.5765% ( 23) 00:14:50.188 13047.622 - 13107.200: 64.7512% ( 17) 00:14:50.188 13107.200 - 13166.778: 64.9260% ( 17) 00:14:50.188 13166.778 - 13226.356: 65.1110% ( 18) 00:14:50.188 13226.356 - 13285.935: 65.3063% ( 19) 00:14:50.188 13285.935 - 13345.513: 65.4708% ( 16) 00:14:50.188 13345.513 - 13405.091: 65.6764% ( 20) 00:14:50.188 13405.091 - 13464.669: 65.8306% ( 15) 00:14:50.188 13464.669 - 13524.247: 66.0156% ( 18) 00:14:50.188 13524.247 - 13583.825: 66.1595% ( 14) 00:14:50.188 13583.825 - 13643.404: 66.2726% ( 11) 00:14:50.188 13643.404 - 13702.982: 66.3960% ( 12) 00:14:50.188 13702.982 - 13762.560: 66.5090% ( 11) 00:14:50.188 13762.560 - 13822.138: 66.6632% ( 15) 00:14:50.188 13822.138 - 13881.716: 66.7969% ( 13) 00:14:50.188 13881.716 - 13941.295: 66.9100% ( 11) 00:14:50.188 13941.295 - 14000.873: 67.0127% ( 10) 00:14:50.188 14000.873 - 14060.451: 67.1258% ( 11) 00:14:50.188 14060.451 - 14120.029: 67.2389% ( 11) 00:14:50.188 14120.029 - 14179.607: 67.3417% ( 10) 00:14:50.188 14179.607 - 14239.185: 67.4137% ( 7) 00:14:50.188 14239.185 - 14298.764: 67.4959% ( 8) 00:14:50.188 14298.764 - 14358.342: 67.5781% ( 8) 00:14:50.188 14358.342 - 14417.920: 67.7015% ( 12) 00:14:50.188 14417.920 - 14477.498: 67.7940% ( 9) 00:14:50.188 14477.498 - 14537.076: 67.8968% ( 10) 00:14:50.188 14537.076 - 14596.655: 67.9893% ( 9) 00:14:50.188 14596.655 - 14656.233: 68.1127% ( 12) 00:14:50.188 14656.233 - 14715.811: 68.2052% ( 9) 00:14:50.188 14715.811 - 14775.389: 68.2463% ( 4) 00:14:50.188 14775.389 - 14834.967: 68.3080% ( 6) 00:14:50.188 14834.967 - 14894.545: 68.3799% ( 7) 00:14:50.188 14894.545 - 14954.124: 68.4416% ( 6) 00:14:50.188 14954.124 - 15013.702: 68.5238% ( 8) 00:14:50.188 15013.702 - 15073.280: 68.6061% ( 8) 00:14:50.188 15073.280 - 15132.858: 68.6986% ( 9) 00:14:50.188 15132.858 - 15192.436: 68.8117% ( 11) 00:14:50.188 15192.436 - 15252.015: 68.9248% ( 11) 00:14:50.188 15252.015 - 15371.171: 69.1612% ( 23) 00:14:50.188 15371.171 - 15490.327: 69.4285% ( 26) 00:14:50.188 15490.327 - 15609.484: 69.7368% ( 30) 00:14:50.188 15609.484 - 15728.640: 70.0350% ( 29) 00:14:50.188 15728.640 - 15847.796: 70.4564% ( 41) 00:14:50.188 15847.796 - 15966.953: 70.8573% ( 39) 00:14:50.188 15966.953 - 16086.109: 71.2685% ( 40) 00:14:50.188 16086.109 - 16205.265: 71.7414% ( 46) 00:14:50.188 16205.265 - 16324.422: 72.2759% ( 52) 00:14:50.188 16324.422 - 16443.578: 72.8824% ( 59) 00:14:50.188 16443.578 - 16562.735: 73.6739% ( 77) 00:14:50.188 16562.735 - 16681.891: 74.4243% ( 73) 00:14:50.188 16681.891 
- 16801.047: 75.3084% ( 86) 00:14:50.188 16801.047 - 16920.204: 76.2541% ( 92) 00:14:50.188 16920.204 - 17039.360: 77.1484% ( 87) 00:14:50.188 17039.360 - 17158.516: 78.0222% ( 85) 00:14:50.188 17158.516 - 17277.673: 78.8137% ( 77) 00:14:50.188 17277.673 - 17396.829: 79.6464% ( 81) 00:14:50.188 17396.829 - 17515.985: 80.4790% ( 81) 00:14:50.188 17515.985 - 17635.142: 81.3939% ( 89) 00:14:50.188 17635.142 - 17754.298: 82.3499% ( 93) 00:14:50.188 17754.298 - 17873.455: 83.2854% ( 91) 00:14:50.188 17873.455 - 17992.611: 84.2002% ( 89) 00:14:50.188 17992.611 - 18111.767: 85.1460% ( 92) 00:14:50.188 18111.767 - 18230.924: 86.1328% ( 96) 00:14:50.188 18230.924 - 18350.080: 87.2738% ( 111) 00:14:50.188 18350.080 - 18469.236: 88.3121% ( 101) 00:14:50.188 18469.236 - 18588.393: 89.4017% ( 106) 00:14:50.188 18588.393 - 18707.549: 90.4605% ( 103) 00:14:50.188 18707.549 - 18826.705: 91.4988% ( 101) 00:14:50.188 18826.705 - 18945.862: 92.4034% ( 88) 00:14:50.188 18945.862 - 19065.018: 93.2566% ( 83) 00:14:50.188 19065.018 - 19184.175: 94.0687% ( 79) 00:14:50.188 19184.175 - 19303.331: 94.7574% ( 67) 00:14:50.188 19303.331 - 19422.487: 95.3125% ( 54) 00:14:50.188 19422.487 - 19541.644: 95.7956% ( 47) 00:14:50.188 19541.644 - 19660.800: 96.1657% ( 36) 00:14:50.188 19660.800 - 19779.956: 96.4638% ( 29) 00:14:50.188 19779.956 - 19899.113: 96.8236% ( 35) 00:14:50.188 19899.113 - 20018.269: 97.1628% ( 33) 00:14:50.188 20018.269 - 20137.425: 97.4198% ( 25) 00:14:50.188 20137.425 - 20256.582: 97.6357% ( 21) 00:14:50.188 20256.582 - 20375.738: 97.8413% ( 20) 00:14:50.188 20375.738 - 20494.895: 98.0366% ( 19) 00:14:50.188 20494.895 - 20614.051: 98.2011% ( 16) 00:14:50.188 20614.051 - 20733.207: 98.3553% ( 15) 00:14:50.188 20733.207 - 20852.364: 98.4889% ( 13) 00:14:50.188 20852.364 - 20971.520: 98.5814% ( 9) 00:14:50.188 20971.520 - 21090.676: 98.6431% ( 6) 00:14:50.188 21090.676 - 21209.833: 98.6739% ( 3) 00:14:50.188 23712.116 - 23831.273: 98.6945% ( 2) 00:14:50.188 23831.273 - 23950.429: 98.7150% ( 2) 00:14:50.188 23950.429 - 24069.585: 98.7459% ( 3) 00:14:50.188 24069.585 - 24188.742: 98.7664% ( 2) 00:14:50.188 24188.742 - 24307.898: 98.7973% ( 3) 00:14:50.188 24307.898 - 24427.055: 98.8178% ( 2) 00:14:50.188 24427.055 - 24546.211: 98.8487% ( 3) 00:14:50.188 24546.211 - 24665.367: 98.8692% ( 2) 00:14:50.188 24665.367 - 24784.524: 98.9001% ( 3) 00:14:50.188 24784.524 - 24903.680: 98.9309% ( 3) 00:14:50.188 24903.680 - 25022.836: 98.9618% ( 3) 00:14:50.188 25022.836 - 25141.993: 98.9926% ( 3) 00:14:50.188 25141.993 - 25261.149: 99.0132% ( 2) 00:14:50.188 25261.149 - 25380.305: 99.0440% ( 3) 00:14:50.188 25380.305 - 25499.462: 99.0748% ( 3) 00:14:50.188 25499.462 - 25618.618: 99.0954% ( 2) 00:14:50.188 25618.618 - 25737.775: 99.1262% ( 3) 00:14:50.188 25737.775 - 25856.931: 99.1468% ( 2) 00:14:50.188 25856.931 - 25976.087: 99.1879% ( 4) 00:14:50.188 25976.087 - 26095.244: 99.2085% ( 2) 00:14:50.188 26095.244 - 26214.400: 99.2393% ( 3) 00:14:50.188 26214.400 - 26333.556: 99.2701% ( 3) 00:14:50.188 26333.556 - 26452.713: 99.3010% ( 3) 00:14:50.188 26452.713 - 26571.869: 99.3215% ( 2) 00:14:50.188 26571.869 - 26691.025: 99.3421% ( 2) 00:14:50.188 34078.720 - 34317.033: 99.3524% ( 1) 00:14:50.188 34317.033 - 34555.345: 99.4038% ( 5) 00:14:50.188 34555.345 - 34793.658: 99.4655% ( 6) 00:14:50.188 34793.658 - 35031.971: 99.5169% ( 5) 00:14:50.188 35031.971 - 35270.284: 99.5683% ( 5) 00:14:50.188 35270.284 - 35508.596: 99.6299% ( 6) 00:14:50.188 35508.596 - 35746.909: 99.6813% ( 5) 00:14:50.188 35746.909 - 35985.222: 
99.7327% ( 5) 00:14:50.188 35985.222 - 36223.535: 99.7841% ( 5) 00:14:50.188 36223.535 - 36461.847: 99.8458% ( 6) 00:14:50.188 36461.847 - 36700.160: 99.8972% ( 5) 00:14:50.188 36700.160 - 36938.473: 99.9486% ( 5) 00:14:50.188 36938.473 - 37176.785: 100.0000% ( 5) 00:14:50.188 00:14:50.188 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:50.188 ============================================================================== 00:14:50.188 Range in us Cumulative IO count 00:14:50.188 7566.429 - 7596.218: 0.0206% ( 2) 00:14:50.188 7596.218 - 7626.007: 0.0411% ( 2) 00:14:50.188 7626.007 - 7685.585: 0.1028% ( 6) 00:14:50.188 7685.585 - 7745.164: 0.1850% ( 8) 00:14:50.188 7745.164 - 7804.742: 0.2775% ( 9) 00:14:50.188 7804.742 - 7864.320: 0.3906% ( 11) 00:14:50.188 7864.320 - 7923.898: 0.5551% ( 16) 00:14:50.188 7923.898 - 7983.476: 0.7196% ( 16) 00:14:50.188 7983.476 - 8043.055: 0.9354% ( 21) 00:14:50.188 8043.055 - 8102.633: 1.1102% ( 17) 00:14:50.188 8102.633 - 8162.211: 1.2850% ( 17) 00:14:50.188 8162.211 - 8221.789: 1.5214% ( 23) 00:14:50.188 8221.789 - 8281.367: 1.8503% ( 32) 00:14:50.188 8281.367 - 8340.945: 2.2410% ( 38) 00:14:50.188 8340.945 - 8400.524: 2.7549% ( 50) 00:14:50.188 8400.524 - 8460.102: 3.3100% ( 54) 00:14:50.189 8460.102 - 8519.680: 3.9371% ( 61) 00:14:50.189 8519.680 - 8579.258: 4.6258% ( 67) 00:14:50.189 8579.258 - 8638.836: 5.3351% ( 69) 00:14:50.189 8638.836 - 8698.415: 6.0136% ( 66) 00:14:50.189 8698.415 - 8757.993: 6.6303% ( 60) 00:14:50.189 8757.993 - 8817.571: 7.3191% ( 67) 00:14:50.189 8817.571 - 8877.149: 7.9770% ( 64) 00:14:50.189 8877.149 - 8936.727: 8.7171% ( 72) 00:14:50.189 8936.727 - 8996.305: 9.4264% ( 69) 00:14:50.189 8996.305 - 9055.884: 10.1151% ( 67) 00:14:50.189 9055.884 - 9115.462: 10.7833% ( 65) 00:14:50.189 9115.462 - 9175.040: 11.4618% ( 66) 00:14:50.189 9175.040 - 9234.618: 12.1505% ( 67) 00:14:50.189 9234.618 - 9294.196: 12.8392% ( 67) 00:14:50.189 9294.196 - 9353.775: 13.5485% ( 69) 00:14:50.189 9353.775 - 9413.353: 14.2064% ( 64) 00:14:50.189 9413.353 - 9472.931: 14.7615% ( 54) 00:14:50.189 9472.931 - 9532.509: 15.2652% ( 49) 00:14:50.189 9532.509 - 9592.087: 15.7175% ( 44) 00:14:50.189 9592.087 - 9651.665: 16.1698% ( 44) 00:14:50.189 9651.665 - 9711.244: 16.6427% ( 46) 00:14:50.189 9711.244 - 9770.822: 17.2286% ( 57) 00:14:50.189 9770.822 - 9830.400: 17.8660% ( 62) 00:14:50.189 9830.400 - 9889.978: 18.6575% ( 77) 00:14:50.189 9889.978 - 9949.556: 19.6032% ( 92) 00:14:50.189 9949.556 - 10009.135: 20.5387% ( 91) 00:14:50.189 10009.135 - 10068.713: 21.7311% ( 116) 00:14:50.189 10068.713 - 10128.291: 22.8927% ( 113) 00:14:50.189 10128.291 - 10187.869: 24.1057% ( 118) 00:14:50.189 10187.869 - 10247.447: 25.5037% ( 136) 00:14:50.189 10247.447 - 10307.025: 26.8400% ( 130) 00:14:50.189 10307.025 - 10366.604: 28.2689% ( 139) 00:14:50.189 10366.604 - 10426.182: 29.6361% ( 133) 00:14:50.189 10426.182 - 10485.760: 30.9827% ( 131) 00:14:50.189 10485.760 - 10545.338: 32.4013% ( 138) 00:14:50.189 10545.338 - 10604.916: 33.7685% ( 133) 00:14:50.189 10604.916 - 10664.495: 35.1665% ( 136) 00:14:50.189 10664.495 - 10724.073: 36.6571% ( 145) 00:14:50.189 10724.073 - 10783.651: 37.9831% ( 129) 00:14:50.189 10783.651 - 10843.229: 39.2887% ( 127) 00:14:50.189 10843.229 - 10902.807: 40.2755% ( 96) 00:14:50.189 10902.807 - 10962.385: 41.1390% ( 84) 00:14:50.189 10962.385 - 11021.964: 41.7660% ( 61) 00:14:50.189 11021.964 - 11081.542: 42.4034% ( 62) 00:14:50.189 11081.542 - 11141.120: 43.0715% ( 65) 00:14:50.189 11141.120 - 11200.698: 43.7500% ( 66) 
00:14:50.189 11200.698 - 11260.276: 44.5107% ( 74) 00:14:50.189 11260.276 - 11319.855: 45.2817% ( 75) 00:14:50.189 11319.855 - 11379.433: 46.1554% ( 85) 00:14:50.189 11379.433 - 11439.011: 47.0395% ( 86) 00:14:50.189 11439.011 - 11498.589: 47.8927% ( 83) 00:14:50.189 11498.589 - 11558.167: 48.7356% ( 82) 00:14:50.189 11558.167 - 11617.745: 49.5169% ( 76) 00:14:50.189 11617.745 - 11677.324: 50.2775% ( 74) 00:14:50.189 11677.324 - 11736.902: 51.0177% ( 72) 00:14:50.189 11736.902 - 11796.480: 51.6961% ( 66) 00:14:50.189 11796.480 - 11856.058: 52.3951% ( 68) 00:14:50.189 11856.058 - 11915.636: 53.0633% ( 65) 00:14:50.189 11915.636 - 11975.215: 53.7521% ( 67) 00:14:50.189 11975.215 - 12034.793: 54.4202% ( 65) 00:14:50.189 12034.793 - 12094.371: 55.1295% ( 69) 00:14:50.189 12094.371 - 12153.949: 55.8285% ( 68) 00:14:50.189 12153.949 - 12213.527: 56.5173% ( 67) 00:14:50.189 12213.527 - 12273.105: 57.1752% ( 64) 00:14:50.189 12273.105 - 12332.684: 57.8433% ( 65) 00:14:50.189 12332.684 - 12392.262: 58.4704% ( 61) 00:14:50.189 12392.262 - 12451.840: 59.1180% ( 63) 00:14:50.189 12451.840 - 12511.418: 59.8067% ( 67) 00:14:50.189 12511.418 - 12570.996: 60.4338% ( 61) 00:14:50.189 12570.996 - 12630.575: 61.0197% ( 57) 00:14:50.189 12630.575 - 12690.153: 61.6468% ( 61) 00:14:50.189 12690.153 - 12749.731: 62.3355% ( 67) 00:14:50.189 12749.731 - 12809.309: 62.9215% ( 57) 00:14:50.189 12809.309 - 12868.887: 63.5177% ( 58) 00:14:50.189 12868.887 - 12928.465: 64.0317% ( 50) 00:14:50.189 12928.465 - 12988.044: 64.3503% ( 31) 00:14:50.189 12988.044 - 13047.622: 64.5662% ( 21) 00:14:50.189 13047.622 - 13107.200: 64.7615% ( 19) 00:14:50.189 13107.200 - 13166.778: 64.9260% ( 16) 00:14:50.189 13166.778 - 13226.356: 65.1110% ( 18) 00:14:50.189 13226.356 - 13285.935: 65.2755% ( 16) 00:14:50.189 13285.935 - 13345.513: 65.4297% ( 15) 00:14:50.189 13345.513 - 13405.091: 65.6250% ( 19) 00:14:50.189 13405.091 - 13464.669: 65.7792% ( 15) 00:14:50.189 13464.669 - 13524.247: 65.9025% ( 12) 00:14:50.189 13524.247 - 13583.825: 66.0465% ( 14) 00:14:50.189 13583.825 - 13643.404: 66.1801% ( 13) 00:14:50.189 13643.404 - 13702.982: 66.3137% ( 13) 00:14:50.189 13702.982 - 13762.560: 66.4576% ( 14) 00:14:50.189 13762.560 - 13822.138: 66.5707% ( 11) 00:14:50.189 13822.138 - 13881.716: 66.6941% ( 12) 00:14:50.189 13881.716 - 13941.295: 66.8072% ( 11) 00:14:50.189 13941.295 - 14000.873: 66.8997% ( 9) 00:14:50.189 14000.873 - 14060.451: 66.9716% ( 7) 00:14:50.189 14060.451 - 14120.029: 67.0333% ( 6) 00:14:50.189 14120.029 - 14179.607: 67.0950% ( 6) 00:14:50.189 14179.607 - 14239.185: 67.1669% ( 7) 00:14:50.189 14239.185 - 14298.764: 67.2286% ( 6) 00:14:50.189 14298.764 - 14358.342: 67.3109% ( 8) 00:14:50.189 14358.342 - 14417.920: 67.3828% ( 7) 00:14:50.189 14417.920 - 14477.498: 67.4856% ( 10) 00:14:50.189 14477.498 - 14537.076: 67.5781% ( 9) 00:14:50.189 14537.076 - 14596.655: 67.6706% ( 9) 00:14:50.189 14596.655 - 14656.233: 67.7632% ( 9) 00:14:50.189 14656.233 - 14715.811: 67.8865% ( 12) 00:14:50.189 14715.811 - 14775.389: 67.9996% ( 11) 00:14:50.189 14775.389 - 14834.967: 68.0818% ( 8) 00:14:50.189 14834.967 - 14894.545: 68.1949% ( 11) 00:14:50.189 14894.545 - 14954.124: 68.2771% ( 8) 00:14:50.189 14954.124 - 15013.702: 68.3902% ( 11) 00:14:50.189 15013.702 - 15073.280: 68.4827% ( 9) 00:14:50.189 15073.280 - 15132.858: 68.5958% ( 11) 00:14:50.189 15132.858 - 15192.436: 68.6986% ( 10) 00:14:50.189 15192.436 - 15252.015: 68.8014% ( 10) 00:14:50.189 15252.015 - 15371.171: 69.0173% ( 21) 00:14:50.189 15371.171 - 15490.327: 69.2229% ( 
20) 00:14:50.189 15490.327 - 15609.484: 69.5621% ( 33) 00:14:50.189 15609.484 - 15728.640: 69.9322% ( 36) 00:14:50.189 15728.640 - 15847.796: 70.3433% ( 40) 00:14:50.189 15847.796 - 15966.953: 70.8059% ( 45) 00:14:50.189 15966.953 - 16086.109: 71.2274% ( 41) 00:14:50.189 16086.109 - 16205.265: 71.7311% ( 49) 00:14:50.189 16205.265 - 16324.422: 72.2553% ( 51) 00:14:50.189 16324.422 - 16443.578: 72.9235% ( 65) 00:14:50.189 16443.578 - 16562.735: 73.7253% ( 78) 00:14:50.189 16562.735 - 16681.891: 74.5066% ( 76) 00:14:50.189 16681.891 - 16801.047: 75.3803% ( 85) 00:14:50.189 16801.047 - 16920.204: 76.2850% ( 88) 00:14:50.189 16920.204 - 17039.360: 77.1793% ( 87) 00:14:50.189 17039.360 - 17158.516: 78.0633% ( 86) 00:14:50.189 17158.516 - 17277.673: 78.9679% ( 88) 00:14:50.189 17277.673 - 17396.829: 79.8931% ( 90) 00:14:50.189 17396.829 - 17515.985: 80.8183% ( 90) 00:14:50.189 17515.985 - 17635.142: 81.6817% ( 84) 00:14:50.189 17635.142 - 17754.298: 82.5761% ( 87) 00:14:50.189 17754.298 - 17873.455: 83.5732% ( 97) 00:14:50.189 17873.455 - 17992.611: 84.5292% ( 93) 00:14:50.189 17992.611 - 18111.767: 85.5160% ( 96) 00:14:50.189 18111.767 - 18230.924: 86.5337% ( 99) 00:14:50.189 18230.924 - 18350.080: 87.5411% ( 98) 00:14:50.189 18350.080 - 18469.236: 88.5999% ( 103) 00:14:50.189 18469.236 - 18588.393: 89.6690% ( 104) 00:14:50.189 18588.393 - 18707.549: 90.7381% ( 104) 00:14:50.189 18707.549 - 18826.705: 91.7146% ( 95) 00:14:50.189 18826.705 - 18945.862: 92.6809% ( 94) 00:14:50.189 18945.862 - 19065.018: 93.5444% ( 84) 00:14:50.189 19065.018 - 19184.175: 94.3257% ( 76) 00:14:50.189 19184.175 - 19303.331: 94.9527% ( 61) 00:14:50.189 19303.331 - 19422.487: 95.4975% ( 53) 00:14:50.189 19422.487 - 19541.644: 95.9498% ( 44) 00:14:50.189 19541.644 - 19660.800: 96.2891% ( 33) 00:14:50.189 19660.800 - 19779.956: 96.6180% ( 32) 00:14:50.189 19779.956 - 19899.113: 96.9161% ( 29) 00:14:50.189 19899.113 - 20018.269: 97.2039% ( 28) 00:14:50.189 20018.269 - 20137.425: 97.4815% ( 27) 00:14:50.189 20137.425 - 20256.582: 97.7590% ( 27) 00:14:50.189 20256.582 - 20375.738: 97.9544% ( 19) 00:14:50.189 20375.738 - 20494.895: 98.1291% ( 17) 00:14:50.189 20494.895 - 20614.051: 98.2936% ( 16) 00:14:50.190 20614.051 - 20733.207: 98.4375% ( 14) 00:14:50.190 20733.207 - 20852.364: 98.5403% ( 10) 00:14:50.190 20852.364 - 20971.520: 98.6225% ( 8) 00:14:50.190 20971.520 - 21090.676: 98.6637% ( 4) 00:14:50.190 21090.676 - 21209.833: 98.6842% ( 2) 00:14:50.190 22282.240 - 22401.396: 98.7048% ( 2) 00:14:50.190 22401.396 - 22520.553: 98.7356% ( 3) 00:14:50.190 22520.553 - 22639.709: 98.7562% ( 2) 00:14:50.190 22639.709 - 22758.865: 98.7870% ( 3) 00:14:50.190 22758.865 - 22878.022: 98.8076% ( 2) 00:14:50.190 22878.022 - 22997.178: 98.8384% ( 3) 00:14:50.190 22997.178 - 23116.335: 98.8692% ( 3) 00:14:50.190 23116.335 - 23235.491: 98.8898% ( 2) 00:14:50.190 23235.491 - 23354.647: 98.9206% ( 3) 00:14:50.190 23354.647 - 23473.804: 98.9515% ( 3) 00:14:50.190 23473.804 - 23592.960: 98.9823% ( 3) 00:14:50.190 23592.960 - 23712.116: 99.0029% ( 2) 00:14:50.190 23712.116 - 23831.273: 99.0337% ( 3) 00:14:50.190 23831.273 - 23950.429: 99.0646% ( 3) 00:14:50.190 23950.429 - 24069.585: 99.0851% ( 2) 00:14:50.190 24069.585 - 24188.742: 99.1160% ( 3) 00:14:50.190 24188.742 - 24307.898: 99.1468% ( 3) 00:14:50.190 24307.898 - 24427.055: 99.1674% ( 2) 00:14:50.190 24427.055 - 24546.211: 99.1879% ( 2) 00:14:50.190 24546.211 - 24665.367: 99.2188% ( 3) 00:14:50.190 24665.367 - 24784.524: 99.2393% ( 2) 00:14:50.190 24784.524 - 24903.680: 99.2701% ( 3) 
00:14:50.190 24903.680 - 25022.836: 99.3010% ( 3) 00:14:50.190 25022.836 - 25141.993: 99.3318% ( 3) 00:14:50.190 25141.993 - 25261.149: 99.3421% ( 1) 00:14:50.190 32648.844 - 32887.156: 99.3627% ( 2) 00:14:50.190 32887.156 - 33125.469: 99.4141% ( 5) 00:14:50.190 33125.469 - 33363.782: 99.4757% ( 6) 00:14:50.190 33363.782 - 33602.095: 99.5374% ( 6) 00:14:50.190 33602.095 - 33840.407: 99.5888% ( 5) 00:14:50.190 33840.407 - 34078.720: 99.6402% ( 5) 00:14:50.190 34078.720 - 34317.033: 99.6916% ( 5) 00:14:50.190 34317.033 - 34555.345: 99.7430% ( 5) 00:14:50.190 34555.345 - 34793.658: 99.7944% ( 5) 00:14:50.190 34793.658 - 35031.971: 99.8458% ( 5) 00:14:50.190 35031.971 - 35270.284: 99.8972% ( 5) 00:14:50.190 35270.284 - 35508.596: 99.9589% ( 6) 00:14:50.190 35508.596 - 35746.909: 100.0000% ( 4) 00:14:50.190 00:14:50.190 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:50.190 ============================================================================== 00:14:50.190 Range in us Cumulative IO count 00:14:50.190 7566.429 - 7596.218: 0.0206% ( 2) 00:14:50.190 7596.218 - 7626.007: 0.0411% ( 2) 00:14:50.190 7626.007 - 7685.585: 0.1028% ( 6) 00:14:50.190 7685.585 - 7745.164: 0.1953% ( 9) 00:14:50.190 7745.164 - 7804.742: 0.2673% ( 7) 00:14:50.190 7804.742 - 7864.320: 0.3701% ( 10) 00:14:50.190 7864.320 - 7923.898: 0.4729% ( 10) 00:14:50.190 7923.898 - 7983.476: 0.5757% ( 10) 00:14:50.190 7983.476 - 8043.055: 0.7710% ( 19) 00:14:50.190 8043.055 - 8102.633: 0.9457% ( 17) 00:14:50.190 8102.633 - 8162.211: 1.1410% ( 19) 00:14:50.190 8162.211 - 8221.789: 1.4083% ( 26) 00:14:50.190 8221.789 - 8281.367: 1.7475% ( 33) 00:14:50.190 8281.367 - 8340.945: 2.1793% ( 42) 00:14:50.190 8340.945 - 8400.524: 2.6624% ( 47) 00:14:50.190 8400.524 - 8460.102: 3.3409% ( 66) 00:14:50.190 8460.102 - 8519.680: 4.0296% ( 67) 00:14:50.190 8519.680 - 8579.258: 4.8314% ( 78) 00:14:50.190 8579.258 - 8638.836: 5.6229% ( 77) 00:14:50.190 8638.836 - 8698.415: 6.3528% ( 71) 00:14:50.190 8698.415 - 8757.993: 7.0621% ( 69) 00:14:50.190 8757.993 - 8817.571: 7.7611% ( 68) 00:14:50.190 8817.571 - 8877.149: 8.4910% ( 71) 00:14:50.190 8877.149 - 8936.727: 9.2311% ( 72) 00:14:50.190 8936.727 - 8996.305: 9.9712% ( 72) 00:14:50.190 8996.305 - 9055.884: 10.6702% ( 68) 00:14:50.190 9055.884 - 9115.462: 11.3692% ( 68) 00:14:50.190 9115.462 - 9175.040: 12.0683% ( 68) 00:14:50.190 9175.040 - 9234.618: 12.7159% ( 63) 00:14:50.190 9234.618 - 9294.196: 13.4354% ( 70) 00:14:50.190 9294.196 - 9353.775: 14.1653% ( 71) 00:14:50.190 9353.775 - 9413.353: 14.8335% ( 65) 00:14:50.190 9413.353 - 9472.931: 15.3783% ( 53) 00:14:50.190 9472.931 - 9532.509: 15.8409% ( 45) 00:14:50.190 9532.509 - 9592.087: 16.2315% ( 38) 00:14:50.190 9592.087 - 9651.665: 16.5707% ( 33) 00:14:50.190 9651.665 - 9711.244: 16.9408% ( 36) 00:14:50.190 9711.244 - 9770.822: 17.3931% ( 44) 00:14:50.190 9770.822 - 9830.400: 17.9585% ( 55) 00:14:50.190 9830.400 - 9889.978: 18.6266% ( 65) 00:14:50.190 9889.978 - 9949.556: 19.5312% ( 88) 00:14:50.190 9949.556 - 10009.135: 20.6414% ( 108) 00:14:50.190 10009.135 - 10068.713: 21.8544% ( 118) 00:14:50.190 10068.713 - 10128.291: 22.9646% ( 108) 00:14:50.190 10128.291 - 10187.869: 24.1160% ( 112) 00:14:50.190 10187.869 - 10247.447: 25.3289% ( 118) 00:14:50.190 10247.447 - 10307.025: 26.6345% ( 127) 00:14:50.190 10307.025 - 10366.604: 27.9400% ( 127) 00:14:50.190 10366.604 - 10426.182: 29.2146% ( 124) 00:14:50.190 10426.182 - 10485.760: 30.5613% ( 131) 00:14:50.190 10485.760 - 10545.338: 31.9490% ( 135) 00:14:50.190 10545.338 - 
10604.916: 33.4087% ( 142) 00:14:50.190 10604.916 - 10664.495: 34.8581% ( 141) 00:14:50.190 10664.495 - 10724.073: 36.2664% ( 137) 00:14:50.190 10724.073 - 10783.651: 37.5822% ( 128) 00:14:50.190 10783.651 - 10843.229: 38.8158% ( 120) 00:14:50.190 10843.229 - 10902.807: 39.9054% ( 106) 00:14:50.190 10902.807 - 10962.385: 40.8306% ( 90) 00:14:50.190 10962.385 - 11021.964: 41.4371% ( 59) 00:14:50.190 11021.964 - 11081.542: 41.8997% ( 45) 00:14:50.190 11081.542 - 11141.120: 42.4856% ( 57) 00:14:50.190 11141.120 - 11200.698: 43.0921% ( 59) 00:14:50.190 11200.698 - 11260.276: 43.8734% ( 76) 00:14:50.190 11260.276 - 11319.855: 44.6957% ( 80) 00:14:50.190 11319.855 - 11379.433: 45.6312% ( 91) 00:14:50.190 11379.433 - 11439.011: 46.5049% ( 85) 00:14:50.190 11439.011 - 11498.589: 47.4301% ( 90) 00:14:50.190 11498.589 - 11558.167: 48.3141% ( 86) 00:14:50.190 11558.167 - 11617.745: 49.1468% ( 81) 00:14:50.190 11617.745 - 11677.324: 49.9383% ( 77) 00:14:50.190 11677.324 - 11736.902: 50.6682% ( 71) 00:14:50.190 11736.902 - 11796.480: 51.3569% ( 67) 00:14:50.190 11796.480 - 11856.058: 52.0970% ( 72) 00:14:50.190 11856.058 - 11915.636: 52.8063% ( 69) 00:14:50.190 11915.636 - 11975.215: 53.5465% ( 72) 00:14:50.190 11975.215 - 12034.793: 54.2455% ( 68) 00:14:50.190 12034.793 - 12094.371: 54.9753% ( 71) 00:14:50.190 12094.371 - 12153.949: 55.6743% ( 68) 00:14:50.190 12153.949 - 12213.527: 56.4145% ( 72) 00:14:50.190 12213.527 - 12273.105: 57.0929% ( 66) 00:14:50.190 12273.105 - 12332.684: 57.7714% ( 66) 00:14:50.190 12332.684 - 12392.262: 58.4293% ( 64) 00:14:50.190 12392.262 - 12451.840: 59.1386% ( 69) 00:14:50.190 12451.840 - 12511.418: 59.8581% ( 70) 00:14:50.190 12511.418 - 12570.996: 60.5777% ( 70) 00:14:50.190 12570.996 - 12630.575: 61.2870% ( 69) 00:14:50.190 12630.575 - 12690.153: 61.9346% ( 63) 00:14:50.190 12690.153 - 12749.731: 62.5617% ( 61) 00:14:50.190 12749.731 - 12809.309: 63.1990% ( 62) 00:14:50.190 12809.309 - 12868.887: 63.7233% ( 51) 00:14:50.190 12868.887 - 12928.465: 64.1961% ( 46) 00:14:50.190 12928.465 - 12988.044: 64.5354% ( 33) 00:14:50.190 12988.044 - 13047.622: 64.7410% ( 20) 00:14:50.190 13047.622 - 13107.200: 64.8849% ( 14) 00:14:50.190 13107.200 - 13166.778: 65.0288% ( 14) 00:14:50.190 13166.778 - 13226.356: 65.1624% ( 13) 00:14:50.190 13226.356 - 13285.935: 65.2858% ( 12) 00:14:50.190 13285.935 - 13345.513: 65.3783% ( 9) 00:14:50.190 13345.513 - 13405.091: 65.5222% ( 14) 00:14:50.190 13405.091 - 13464.669: 65.6867% ( 16) 00:14:50.190 13464.669 - 13524.247: 65.8306% ( 14) 00:14:50.190 13524.247 - 13583.825: 65.9539% ( 12) 00:14:50.190 13583.825 - 13643.404: 66.0979% ( 14) 00:14:50.190 13643.404 - 13702.982: 66.2109% ( 11) 00:14:50.190 13702.982 - 13762.560: 66.3446% ( 13) 00:14:50.190 13762.560 - 13822.138: 66.4782% ( 13) 00:14:50.190 13822.138 - 13881.716: 66.5707% ( 9) 00:14:50.190 13881.716 - 13941.295: 66.6735% ( 10) 00:14:50.190 13941.295 - 14000.873: 66.7558% ( 8) 00:14:50.190 14000.873 - 14060.451: 66.8277% ( 7) 00:14:50.190 14060.451 - 14120.029: 66.8997% ( 7) 00:14:50.190 14120.029 - 14179.607: 66.9819% ( 8) 00:14:50.190 14179.607 - 14239.185: 67.0847% ( 10) 00:14:50.190 14239.185 - 14298.764: 67.1567% ( 7) 00:14:50.190 14298.764 - 14358.342: 67.2389% ( 8) 00:14:50.190 14358.342 - 14417.920: 67.3211% ( 8) 00:14:50.190 14417.920 - 14477.498: 67.4137% ( 9) 00:14:50.190 14477.498 - 14537.076: 67.5062% ( 9) 00:14:50.190 14537.076 - 14596.655: 67.5987% ( 9) 00:14:50.190 14596.655 - 14656.233: 67.6809% ( 8) 00:14:50.190 14656.233 - 14715.811: 67.7734% ( 9) 00:14:50.190 
14715.811 - 14775.389: 67.8865% ( 11) 00:14:50.190 14775.389 - 14834.967: 68.0304% ( 14) 00:14:50.190 14834.967 - 14894.545: 68.1332% ( 10) 00:14:50.190 14894.545 - 14954.124: 68.2463% ( 11) 00:14:50.190 14954.124 - 15013.702: 68.3799% ( 13) 00:14:50.190 15013.702 - 15073.280: 68.5341% ( 15) 00:14:50.190 15073.280 - 15132.858: 68.6472% ( 11) 00:14:50.190 15132.858 - 15192.436: 68.8014% ( 15) 00:14:50.190 15192.436 - 15252.015: 68.9762% ( 17) 00:14:50.190 15252.015 - 15371.171: 69.3257% ( 34) 00:14:50.190 15371.171 - 15490.327: 69.7163% ( 38) 00:14:50.190 15490.327 - 15609.484: 70.0863% ( 36) 00:14:50.190 15609.484 - 15728.640: 70.4564% ( 36) 00:14:50.190 15728.640 - 15847.796: 70.8368% ( 37) 00:14:50.190 15847.796 - 15966.953: 71.2479% ( 40) 00:14:50.190 15966.953 - 16086.109: 71.7311% ( 47) 00:14:50.190 16086.109 - 16205.265: 72.2656% ( 52) 00:14:50.190 16205.265 - 16324.422: 72.8516% ( 57) 00:14:50.190 16324.422 - 16443.578: 73.5300% ( 66) 00:14:50.190 16443.578 - 16562.735: 74.3113% ( 76) 00:14:50.191 16562.735 - 16681.891: 75.1542% ( 82) 00:14:50.191 16681.891 - 16801.047: 76.1410% ( 96) 00:14:50.191 16801.047 - 16920.204: 77.0662% ( 90) 00:14:50.191 16920.204 - 17039.360: 78.0016% ( 91) 00:14:50.191 17039.360 - 17158.516: 78.9062% ( 88) 00:14:50.191 17158.516 - 17277.673: 79.7800% ( 85) 00:14:50.191 17277.673 - 17396.829: 80.6641% ( 86) 00:14:50.191 17396.829 - 17515.985: 81.5687% ( 88) 00:14:50.191 17515.985 - 17635.142: 82.4836% ( 89) 00:14:50.191 17635.142 - 17754.298: 83.4190% ( 91) 00:14:50.191 17754.298 - 17873.455: 84.3853% ( 94) 00:14:50.191 17873.455 - 17992.611: 85.3002% ( 89) 00:14:50.191 17992.611 - 18111.767: 86.1328% ( 81) 00:14:50.191 18111.767 - 18230.924: 87.1505% ( 99) 00:14:50.191 18230.924 - 18350.080: 88.1065% ( 93) 00:14:50.191 18350.080 - 18469.236: 88.9803% ( 85) 00:14:50.191 18469.236 - 18588.393: 89.9054% ( 90) 00:14:50.191 18588.393 - 18707.549: 90.8203% ( 89) 00:14:50.191 18707.549 - 18826.705: 91.6735% ( 83) 00:14:50.191 18826.705 - 18945.862: 92.4753% ( 78) 00:14:50.191 18945.862 - 19065.018: 93.2257% ( 73) 00:14:50.191 19065.018 - 19184.175: 93.8734% ( 63) 00:14:50.191 19184.175 - 19303.331: 94.4490% ( 56) 00:14:50.191 19303.331 - 19422.487: 94.8499% ( 39) 00:14:50.191 19422.487 - 19541.644: 95.2200% ( 36) 00:14:50.191 19541.644 - 19660.800: 95.5387% ( 31) 00:14:50.191 19660.800 - 19779.956: 95.8265% ( 28) 00:14:50.191 19779.956 - 19899.113: 96.1657% ( 33) 00:14:50.191 19899.113 - 20018.269: 96.4741% ( 30) 00:14:50.191 20018.269 - 20137.425: 96.7311% ( 25) 00:14:50.191 20137.425 - 20256.582: 97.0498% ( 31) 00:14:50.191 20256.582 - 20375.738: 97.2759% ( 22) 00:14:50.191 20375.738 - 20494.895: 97.5329% ( 25) 00:14:50.191 20494.895 - 20614.051: 97.8104% ( 27) 00:14:50.191 20614.051 - 20733.207: 98.0058% ( 19) 00:14:50.191 20733.207 - 20852.364: 98.1908% ( 18) 00:14:50.191 20852.364 - 20971.520: 98.3347% ( 14) 00:14:50.191 20971.520 - 21090.676: 98.4581% ( 12) 00:14:50.191 21090.676 - 21209.833: 98.5197% ( 6) 00:14:50.191 21209.833 - 21328.989: 98.5814% ( 6) 00:14:50.191 21328.989 - 21448.145: 98.6431% ( 6) 00:14:50.191 21448.145 - 21567.302: 98.7150% ( 7) 00:14:50.191 21567.302 - 21686.458: 98.7767% ( 6) 00:14:50.191 21686.458 - 21805.615: 98.8487% ( 7) 00:14:50.191 21805.615 - 21924.771: 98.9104% ( 6) 00:14:50.191 21924.771 - 22043.927: 98.9720% ( 6) 00:14:50.191 22043.927 - 22163.084: 99.0337% ( 6) 00:14:50.191 22163.084 - 22282.240: 99.0954% ( 6) 00:14:50.191 22282.240 - 22401.396: 99.1262% ( 3) 00:14:50.191 22401.396 - 22520.553: 99.1468% ( 2) 
00:14:50.191 22520.553 - 22639.709: 99.1674% ( 2) 00:14:50.191 22639.709 - 22758.865: 99.1982% ( 3) 00:14:50.191 22758.865 - 22878.022: 99.2188% ( 2) 00:14:50.191 22878.022 - 22997.178: 99.2393% ( 2) 00:14:50.191 22997.178 - 23116.335: 99.2599% ( 2) 00:14:50.191 23116.335 - 23235.491: 99.2804% ( 2) 00:14:50.191 23235.491 - 23354.647: 99.3010% ( 2) 00:14:50.191 23354.647 - 23473.804: 99.3215% ( 2) 00:14:50.191 23473.804 - 23592.960: 99.3421% ( 2) 00:14:50.191 30980.655 - 31218.967: 99.3524% ( 1) 00:14:50.191 31218.967 - 31457.280: 99.3935% ( 4) 00:14:50.191 31457.280 - 31695.593: 99.4449% ( 5) 00:14:50.191 31695.593 - 31933.905: 99.4963% ( 5) 00:14:50.191 31933.905 - 32172.218: 99.5477% ( 5) 00:14:50.191 32172.218 - 32410.531: 99.5991% ( 5) 00:14:50.191 32410.531 - 32648.844: 99.6505% ( 5) 00:14:50.191 32648.844 - 32887.156: 99.6916% ( 4) 00:14:50.191 32887.156 - 33125.469: 99.7327% ( 4) 00:14:50.191 33125.469 - 33363.782: 99.7841% ( 5) 00:14:50.191 33363.782 - 33602.095: 99.8458% ( 6) 00:14:50.191 33602.095 - 33840.407: 99.8869% ( 4) 00:14:50.191 33840.407 - 34078.720: 99.9486% ( 6) 00:14:50.191 34078.720 - 34317.033: 100.0000% ( 5) 00:14:50.191 00:14:50.191 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:50.191 ============================================================================== 00:14:50.191 Range in us Cumulative IO count 00:14:50.191 7536.640 - 7566.429: 0.0206% ( 2) 00:14:50.191 7566.429 - 7596.218: 0.0411% ( 2) 00:14:50.191 7596.218 - 7626.007: 0.0617% ( 2) 00:14:50.191 7626.007 - 7685.585: 0.1234% ( 6) 00:14:50.191 7685.585 - 7745.164: 0.2056% ( 8) 00:14:50.191 7745.164 - 7804.742: 0.2775% ( 7) 00:14:50.191 7804.742 - 7864.320: 0.3701% ( 9) 00:14:50.191 7864.320 - 7923.898: 0.5243% ( 15) 00:14:50.191 7923.898 - 7983.476: 0.6887% ( 16) 00:14:50.191 7983.476 - 8043.055: 0.8532% ( 16) 00:14:50.191 8043.055 - 8102.633: 1.0382% ( 18) 00:14:50.191 8102.633 - 8162.211: 1.2336% ( 19) 00:14:50.191 8162.211 - 8221.789: 1.5317% ( 29) 00:14:50.191 8221.789 - 8281.367: 1.9326% ( 39) 00:14:50.191 8281.367 - 8340.945: 2.3951% ( 45) 00:14:50.191 8340.945 - 8400.524: 2.9400% ( 53) 00:14:50.191 8400.524 - 8460.102: 3.5670% ( 61) 00:14:50.191 8460.102 - 8519.680: 4.2660% ( 68) 00:14:50.191 8519.680 - 8579.258: 4.9959% ( 71) 00:14:50.191 8579.258 - 8638.836: 5.7771% ( 76) 00:14:50.191 8638.836 - 8698.415: 6.5481% ( 75) 00:14:50.191 8698.415 - 8757.993: 7.2574% ( 69) 00:14:50.191 8757.993 - 8817.571: 7.9975% ( 72) 00:14:50.191 8817.571 - 8877.149: 8.7377% ( 72) 00:14:50.191 8877.149 - 8936.727: 9.4161% ( 66) 00:14:50.191 8936.727 - 8996.305: 10.0946% ( 66) 00:14:50.191 8996.305 - 9055.884: 10.8450% ( 73) 00:14:50.191 9055.884 - 9115.462: 11.5029% ( 64) 00:14:50.191 9115.462 - 9175.040: 12.1608% ( 64) 00:14:50.191 9175.040 - 9234.618: 12.8701% ( 69) 00:14:50.191 9234.618 - 9294.196: 13.5794% ( 69) 00:14:50.191 9294.196 - 9353.775: 14.2475% ( 65) 00:14:50.191 9353.775 - 9413.353: 14.8643% ( 60) 00:14:50.191 9413.353 - 9472.931: 15.3988% ( 52) 00:14:50.191 9472.931 - 9532.509: 15.7895% ( 38) 00:14:50.191 9532.509 - 9592.087: 16.1904% ( 39) 00:14:50.191 9592.087 - 9651.665: 16.5502% ( 35) 00:14:50.191 9651.665 - 9711.244: 16.9305% ( 37) 00:14:50.191 9711.244 - 9770.822: 17.3314% ( 39) 00:14:50.191 9770.822 - 9830.400: 17.8146% ( 47) 00:14:50.191 9830.400 - 9889.978: 18.5033% ( 67) 00:14:50.191 9889.978 - 9949.556: 19.3976% ( 87) 00:14:50.191 9949.556 - 10009.135: 20.5284% ( 110) 00:14:50.191 10009.135 - 10068.713: 21.7002% ( 114) 00:14:50.191 10068.713 - 10128.291: 22.8516% ( 
112) 00:14:50.191 10128.291 - 10187.869: 24.0543% ( 117) 00:14:50.191 10187.869 - 10247.447: 25.3803% ( 129) 00:14:50.191 10247.447 - 10307.025: 26.7167% ( 130) 00:14:50.191 10307.025 - 10366.604: 27.9811% ( 123) 00:14:50.191 10366.604 - 10426.182: 29.2558% ( 124) 00:14:50.191 10426.182 - 10485.760: 30.5818% ( 129) 00:14:50.191 10485.760 - 10545.338: 31.9079% ( 129) 00:14:50.191 10545.338 - 10604.916: 33.3882% ( 144) 00:14:50.191 10604.916 - 10664.495: 34.7553% ( 133) 00:14:50.191 10664.495 - 10724.073: 36.2048% ( 141) 00:14:50.191 10724.073 - 10783.651: 37.6131% ( 137) 00:14:50.191 10783.651 - 10843.229: 38.8261% ( 118) 00:14:50.191 10843.229 - 10902.807: 39.8849% ( 103) 00:14:50.191 10902.807 - 10962.385: 40.7792% ( 87) 00:14:50.191 10962.385 - 11021.964: 41.4576% ( 66) 00:14:50.191 11021.964 - 11081.542: 41.9716% ( 50) 00:14:50.191 11081.542 - 11141.120: 42.5576% ( 57) 00:14:50.191 11141.120 - 11200.698: 43.1846% ( 61) 00:14:50.191 11200.698 - 11260.276: 43.8939% ( 69) 00:14:50.191 11260.276 - 11319.855: 44.7163% ( 80) 00:14:50.191 11319.855 - 11379.433: 45.6620% ( 92) 00:14:50.191 11379.433 - 11439.011: 46.5049% ( 82) 00:14:50.191 11439.011 - 11498.589: 47.3684% ( 84) 00:14:50.191 11498.589 - 11558.167: 48.2936% ( 90) 00:14:50.191 11558.167 - 11617.745: 49.1571% ( 84) 00:14:50.191 11617.745 - 11677.324: 50.0103% ( 83) 00:14:50.191 11677.324 - 11736.902: 50.7710% ( 74) 00:14:50.191 11736.902 - 11796.480: 51.5419% ( 75) 00:14:50.191 11796.480 - 11856.058: 52.2512% ( 69) 00:14:50.191 11856.058 - 11915.636: 53.0016% ( 73) 00:14:50.191 11915.636 - 11975.215: 53.7418% ( 72) 00:14:50.191 11975.215 - 12034.793: 54.4305% ( 67) 00:14:50.191 12034.793 - 12094.371: 55.1398% ( 69) 00:14:50.191 12094.371 - 12153.949: 55.8594% ( 70) 00:14:50.191 12153.949 - 12213.527: 56.5584% ( 68) 00:14:50.191 12213.527 - 12273.105: 57.1752% ( 60) 00:14:50.191 12273.105 - 12332.684: 57.8433% ( 65) 00:14:50.191 12332.684 - 12392.262: 58.5218% ( 66) 00:14:50.191 12392.262 - 12451.840: 59.1283% ( 59) 00:14:50.191 12451.840 - 12511.418: 59.7451% ( 60) 00:14:50.191 12511.418 - 12570.996: 60.4544% ( 69) 00:14:50.191 12570.996 - 12630.575: 61.0917% ( 62) 00:14:50.191 12630.575 - 12690.153: 61.7290% ( 62) 00:14:50.191 12690.153 - 12749.731: 62.3664% ( 62) 00:14:50.191 12749.731 - 12809.309: 62.9729% ( 59) 00:14:50.191 12809.309 - 12868.887: 63.4766% ( 49) 00:14:50.191 12868.887 - 12928.465: 63.9186% ( 43) 00:14:50.191 12928.465 - 12988.044: 64.2578% ( 33) 00:14:50.191 12988.044 - 13047.622: 64.5045% ( 24) 00:14:50.191 13047.622 - 13107.200: 64.6793% ( 17) 00:14:50.191 13107.200 - 13166.778: 64.8129% ( 13) 00:14:50.191 13166.778 - 13226.356: 64.9774% ( 16) 00:14:50.191 13226.356 - 13285.935: 65.1110% ( 13) 00:14:50.191 13285.935 - 13345.513: 65.2344% ( 12) 00:14:50.191 13345.513 - 13405.091: 65.3783% ( 14) 00:14:50.191 13405.091 - 13464.669: 65.5428% ( 16) 00:14:50.191 13464.669 - 13524.247: 65.7072% ( 16) 00:14:50.191 13524.247 - 13583.825: 65.8923% ( 18) 00:14:50.191 13583.825 - 13643.404: 66.0465% ( 15) 00:14:50.191 13643.404 - 13702.982: 66.1698% ( 12) 00:14:50.191 13702.982 - 13762.560: 66.3035% ( 13) 00:14:50.191 13762.560 - 13822.138: 66.4268% ( 12) 00:14:50.191 13822.138 - 13881.716: 66.5502% ( 12) 00:14:50.191 13881.716 - 13941.295: 66.6941% ( 14) 00:14:50.191 13941.295 - 14000.873: 66.8072% ( 11) 00:14:50.192 14000.873 - 14060.451: 66.9100% ( 10) 00:14:50.192 14060.451 - 14120.029: 67.0230% ( 11) 00:14:50.192 14120.029 - 14179.607: 67.1053% ( 8) 00:14:50.192 14179.607 - 14239.185: 67.2286% ( 12) 00:14:50.192 
14239.185 - 14298.764: 67.3520% ( 12) 00:14:50.192 14298.764 - 14358.342: 67.4548% ( 10) 00:14:50.192 14358.342 - 14417.920: 67.5370% ( 8) 00:14:50.192 14417.920 - 14477.498: 67.6192% ( 8) 00:14:50.192 14477.498 - 14537.076: 67.7015% ( 8) 00:14:50.192 14537.076 - 14596.655: 67.7734% ( 7) 00:14:50.192 14596.655 - 14656.233: 67.8660% ( 9) 00:14:50.192 14656.233 - 14715.811: 67.9482% ( 8) 00:14:50.192 14715.811 - 14775.389: 68.0201% ( 7) 00:14:50.192 14775.389 - 14834.967: 68.1127% ( 9) 00:14:50.192 14834.967 - 14894.545: 68.1641% ( 5) 00:14:50.192 14894.545 - 14954.124: 68.2155% ( 5) 00:14:50.192 14954.124 - 15013.702: 68.3183% ( 10) 00:14:50.192 15013.702 - 15073.280: 68.3799% ( 6) 00:14:50.192 15073.280 - 15132.858: 68.4622% ( 8) 00:14:50.192 15132.858 - 15192.436: 68.5752% ( 11) 00:14:50.192 15192.436 - 15252.015: 68.6883% ( 11) 00:14:50.192 15252.015 - 15371.171: 68.9864% ( 29) 00:14:50.192 15371.171 - 15490.327: 69.3257% ( 33) 00:14:50.192 15490.327 - 15609.484: 69.6649% ( 33) 00:14:50.192 15609.484 - 15728.640: 70.1069% ( 43) 00:14:50.192 15728.640 - 15847.796: 70.5078% ( 39) 00:14:50.192 15847.796 - 15966.953: 70.9498% ( 43) 00:14:50.192 15966.953 - 16086.109: 71.4741% ( 51) 00:14:50.192 16086.109 - 16205.265: 72.0395% ( 55) 00:14:50.192 16205.265 - 16324.422: 72.6151% ( 56) 00:14:50.192 16324.422 - 16443.578: 73.3450% ( 71) 00:14:50.192 16443.578 - 16562.735: 74.2599% ( 89) 00:14:50.192 16562.735 - 16681.891: 75.3289% ( 104) 00:14:50.192 16681.891 - 16801.047: 76.3158% ( 96) 00:14:50.192 16801.047 - 16920.204: 77.3540% ( 101) 00:14:50.192 16920.204 - 17039.360: 78.3409% ( 96) 00:14:50.192 17039.360 - 17158.516: 79.3277% ( 96) 00:14:50.192 17158.516 - 17277.673: 80.3351% ( 98) 00:14:50.192 17277.673 - 17396.829: 81.2603% ( 90) 00:14:50.192 17396.829 - 17515.985: 82.1649% ( 88) 00:14:50.192 17515.985 - 17635.142: 83.0181% ( 83) 00:14:50.192 17635.142 - 17754.298: 83.8096% ( 77) 00:14:50.192 17754.298 - 17873.455: 84.5498% ( 72) 00:14:50.192 17873.455 - 17992.611: 85.3618% ( 79) 00:14:50.192 17992.611 - 18111.767: 86.2150% ( 83) 00:14:50.192 18111.767 - 18230.924: 87.0683% ( 83) 00:14:50.192 18230.924 - 18350.080: 87.9729% ( 88) 00:14:50.192 18350.080 - 18469.236: 88.8980% ( 90) 00:14:50.192 18469.236 - 18588.393: 89.8951% ( 97) 00:14:50.192 18588.393 - 18707.549: 90.8203% ( 90) 00:14:50.192 18707.549 - 18826.705: 91.6735% ( 83) 00:14:50.192 18826.705 - 18945.862: 92.4445% ( 75) 00:14:50.192 18945.862 - 19065.018: 93.2257% ( 76) 00:14:50.192 19065.018 - 19184.175: 93.8734% ( 63) 00:14:50.192 19184.175 - 19303.331: 94.4285% ( 54) 00:14:50.192 19303.331 - 19422.487: 94.8808% ( 44) 00:14:50.192 19422.487 - 19541.644: 95.2508% ( 36) 00:14:50.192 19541.644 - 19660.800: 95.6003% ( 34) 00:14:50.192 19660.800 - 19779.956: 95.9087% ( 30) 00:14:50.192 19779.956 - 19899.113: 96.2582% ( 34) 00:14:50.192 19899.113 - 20018.269: 96.5872% ( 32) 00:14:50.192 20018.269 - 20137.425: 96.9470% ( 35) 00:14:50.192 20137.425 - 20256.582: 97.2965% ( 34) 00:14:50.192 20256.582 - 20375.738: 97.6151% ( 31) 00:14:50.192 20375.738 - 20494.895: 97.8927% ( 27) 00:14:50.192 20494.895 - 20614.051: 98.1908% ( 29) 00:14:50.192 20614.051 - 20733.207: 98.4478% ( 25) 00:14:50.192 20733.207 - 20852.364: 98.6842% ( 23) 00:14:50.192 20852.364 - 20971.520: 98.8590% ( 17) 00:14:50.192 20971.520 - 21090.676: 98.9823% ( 12) 00:14:50.192 21090.676 - 21209.833: 99.0646% ( 8) 00:14:50.192 21209.833 - 21328.989: 99.1057% ( 4) 00:14:50.192 21328.989 - 21448.145: 99.1468% ( 4) 00:14:50.192 21448.145 - 21567.302: 99.1879% ( 4) 
00:14:50.192 21567.302 - 21686.458: 99.2290% ( 4)
00:14:50.192 21686.458 - 21805.615: 99.2804% ( 5)
00:14:50.192 21805.615 - 21924.771: 99.3215% ( 4)
00:14:50.192 21924.771 - 22043.927: 99.3421% ( 2)
00:14:50.192 28954.996 - 29074.153: 99.3627% ( 2)
00:14:50.192 29074.153 - 29193.309: 99.3935% ( 3)
00:14:50.192 29193.309 - 29312.465: 99.4141% ( 2)
00:14:50.192 29312.465 - 29431.622: 99.4449% ( 3)
00:14:50.192 29431.622 - 29550.778: 99.4757% ( 3)
00:14:50.192 29550.778 - 29669.935: 99.5066% ( 3)
00:14:50.192 29669.935 - 29789.091: 99.5271% ( 2)
00:14:50.192 29789.091 - 29908.247: 99.5580% ( 3)
00:14:50.192 29908.247 - 30027.404: 99.5785% ( 2)
00:14:50.192 30027.404 - 30146.560: 99.6094% ( 3)
00:14:50.192 30146.560 - 30265.716: 99.6402% ( 3)
00:14:50.192 30265.716 - 30384.873: 99.6711% ( 3)
00:14:50.192 30384.873 - 30504.029: 99.6916% ( 2)
00:14:50.192 30504.029 - 30742.342: 99.7430% ( 5)
00:14:50.192 30742.342 - 30980.655: 99.8047% ( 6)
00:14:50.192 30980.655 - 31218.967: 99.8561% ( 5)
00:14:50.192 31218.967 - 31457.280: 99.9075% ( 5)
00:14:50.192 31457.280 - 31695.593: 99.9692% ( 6)
00:14:50.192 31695.593 - 31933.905: 100.0000% ( 3)
00:14:50.192
00:14:50.450 13:34:42 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:51.828 Initializing NVMe Controllers
00:14:51.828 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:51.828 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:14:51.828 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:14:51.828 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:14:51.828 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:51.828 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:14:51.828 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:14:51.828 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:14:51.828 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:14:51.828 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:14:51.828 Initialization complete. Launching workers.
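The spdk_nvme_perf invocation above (-q 128 -w write -o 12288 -t 1 -LL -i 0) drives a one-second write workload at queue depth 128 with 12288-byte IOs, and with latency tracking enabled it emits the per-namespace percentile summaries and full latency histograms that follow. As a minimal sketch (editorial, not part of the captured output; the build.log path and the parse_summaries name are illustrative), those percentile summaries can be pulled back out of a saved console log like so:

import re
from collections import defaultdict

# Matches the "Summary latency data for <device> from core N:" header lines.
DEVICE_RE = re.compile(r"Summary latency data for (.+?) from core \d+:")
# Matches the "<percentile>% : <latency>us" lines inside a summary block.
PCT_RE = re.compile(r"(\d+\.\d+)%\s*:\s*([\d.]+)us")

def parse_summaries(log_text):
    """Return a dict mapping device -> {percentile -> latency in us}."""
    tables = defaultdict(dict)
    device = None
    for line in log_text.splitlines():
        m = DEVICE_RE.search(line)
        if m:
            device = m.group(1)
            continue
        if device:
            m = PCT_RE.search(line)
            if m:
                tables[device][float(m.group(1))] = float(m.group(2))
    return dict(tables)

if __name__ == "__main__":
    # build.log is a placeholder for wherever the console log was saved.
    with open("build.log") as f:
        tables = parse_summaries(f.read())
    for dev, pct in tables.items():
        print(f"{dev}: p50={pct.get(50.0)}us p99={pct.get(99.0)}us")

Against this log it would print one p50/p99 line per attached namespace, which can be handy when comparing latency between CI runs.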
00:14:51.828 ========================================================
00:14:51.828 Latency(us)
00:14:51.828 Device Information : IOPS MiB/s Average min max
00:14:51.828 PCIE (0000:00:10.0) NSID 1 from core 0: 7316.59 85.74 17506.82 9742.24 52540.69
00:14:51.828 PCIE (0000:00:11.0) NSID 1 from core 0: 7316.59 85.74 17436.10 9853.23 48083.59
00:14:51.828 PCIE (0000:00:13.0) NSID 1 from core 0: 7316.59 85.74 17363.00 9931.11 44545.46
00:14:51.828 PCIE (0000:00:12.0) NSID 1 from core 0: 7316.59 85.74 17288.59 10003.73 40449.15
00:14:51.828 PCIE (0000:00:12.0) NSID 2 from core 0: 7316.59 85.74 17214.61 10116.45 36369.93
00:14:51.828 PCIE (0000:00:12.0) NSID 3 from core 0: 7316.59 85.74 17141.92 9879.76 32216.49
00:14:51.828 ========================================================
00:14:51.828 Total : 43899.54 514.45 17325.17 9742.24 52540.69
00:14:51.828
00:14:51.828 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:51.828 =================================================================================
00:14:51.828 1.00000% : 10307.025us
00:14:51.828 10.00000% : 12690.153us
00:14:51.828 25.00000% : 14417.920us
00:14:51.828 50.00000% : 17515.985us
00:14:51.828 75.00000% : 19899.113us
00:14:51.828 90.00000% : 21328.989us
00:14:51.828 95.00000% : 22282.240us
00:14:51.828 98.00000% : 24427.055us
00:14:51.828 99.00000% : 41466.415us
00:14:51.828 99.50000% : 50522.298us
00:14:51.828 99.90000% : 52190.487us
00:14:51.828 99.99000% : 52667.113us
00:14:51.828 99.99900% : 52667.113us
00:14:51.828 99.99990% : 52667.113us
00:14:51.828 99.99999% : 52667.113us
00:14:51.828
00:14:51.828 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:14:51.828 =================================================================================
00:14:51.828 1.00000% : 10604.916us
00:14:51.828 10.00000% : 12690.153us
00:14:51.828 25.00000% : 14239.185us
00:14:51.828 50.00000% : 17515.985us
00:14:51.828 75.00000% : 19899.113us
00:14:51.828 90.00000% : 21209.833us
00:14:51.828 95.00000% : 22043.927us
00:14:51.828 98.00000% : 24188.742us
00:14:51.828 99.00000% : 38130.036us
00:14:51.828 99.50000% : 46232.669us
00:14:51.828 99.90000% : 47900.858us
00:14:51.828 99.99000% : 48139.171us
00:14:51.828 99.99900% : 48139.171us
00:14:51.828 99.99990% : 48139.171us
00:14:51.828 99.99999% : 48139.171us
00:14:51.828
00:14:51.828 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:14:51.828 =================================================================================
00:14:51.828 1.00000% : 10604.916us
00:14:51.828 10.00000% : 12570.996us
00:14:51.828 25.00000% : 14179.607us
00:14:51.828 50.00000% : 17515.985us
00:14:51.828 75.00000% : 19899.113us
00:14:51.828 90.00000% : 21209.833us
00:14:51.828 95.00000% : 22282.240us
00:14:51.828 98.00000% : 24665.367us
00:14:51.828 99.00000% : 34555.345us
00:14:51.828 99.50000% : 42657.978us
00:14:51.828 99.90000% : 44326.167us
00:14:51.828 99.99000% : 44564.480us
00:14:51.828 99.99900% : 44564.480us
00:14:51.828 99.99990% : 44564.480us
00:14:51.828 99.99999% : 44564.480us
00:14:51.828
00:14:51.828 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:14:51.828 =================================================================================
00:14:51.828 1.00000% : 10426.182us
00:14:51.828 10.00000% : 12273.105us
00:14:51.828 25.00000% : 14358.342us
00:14:51.828 50.00000% : 17515.985us
00:14:51.828 75.00000% : 19899.113us
00:14:51.828 90.00000% : 21328.989us
00:14:51.828 95.00000% : 22282.240us
00:14:51.828 98.00000% : 24546.211us
00:14:51.828 99.00000% : 30384.873us
00:14:51.828 99.50000% : 38606.662us
00:14:51.828 99.90000% : 40274.851us
00:14:51.828 99.99000% : 40513.164us
00:14:51.828 99.99900% : 40513.164us
00:14:51.828 99.99990% : 40513.164us
00:14:51.828 99.99999% : 40513.164us
00:14:51.828
00:14:51.828 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:14:51.828 =================================================================================
00:14:51.828 1.00000% : 10426.182us
00:14:51.828 10.00000% : 12273.105us
00:14:51.828 25.00000% : 14417.920us
00:14:51.828 50.00000% : 17635.142us
00:14:51.828 75.00000% : 19899.113us
00:14:51.828 90.00000% : 21209.833us
00:14:51.828 95.00000% : 22163.084us
00:14:51.828 98.00000% : 23950.429us
00:14:51.828 99.00000% : 26095.244us
00:14:51.828 99.50000% : 34555.345us
00:14:51.828 99.90000% : 36223.535us
00:14:51.828 99.99000% : 36461.847us
00:14:51.828 99.99900% : 36461.847us
00:14:51.828 99.99990% : 36461.847us
00:14:51.828 99.99999% : 36461.847us
00:14:51.828
00:14:51.828 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:51.828 =================================================================================
00:14:51.828 1.00000% : 10485.760us
00:14:51.828 10.00000% : 12273.105us
00:14:51.828 25.00000% : 14537.076us
00:14:51.828 50.00000% : 17515.985us
00:14:51.828 75.00000% : 19899.113us
00:14:51.828 90.00000% : 21209.833us
00:14:51.828 95.00000% : 22043.927us
00:14:51.828 98.00000% : 23473.804us
00:14:51.828 99.00000% : 24427.055us
00:14:51.828 99.50000% : 30384.873us
00:14:51.828 99.90000% : 31933.905us
00:14:51.828 99.99000% : 32410.531us
00:14:51.828 99.99900% : 32410.531us
00:14:51.828 99.99990% : 32410.531us
00:14:51.828 99.99999% : 32410.531us
00:14:51.828
00:14:51.828 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:51.828 ==============================================================================
00:14:51.828 Range in us Cumulative IO count
00:14:51.828 9711.244 - 9770.822: 0.0679% ( 5)
00:14:51.828 9770.822 - 9830.400: 0.1495% ( 6)
00:14:51.828 9830.400 - 9889.978: 0.2582% ( 8)
00:14:51.828 9889.978 - 9949.556: 0.3668% ( 8)
00:14:51.828 9949.556 - 10009.135: 0.4755% ( 8)
00:14:51.828 10009.135 - 10068.713: 0.5978% ( 9)
00:14:51.828 10068.713 - 10128.291: 0.7065% ( 8)
00:14:51.828 10128.291 - 10187.869: 0.8288% ( 9)
00:14:51.828 10187.869 - 10247.447: 0.9239% ( 7)
00:14:51.828 10247.447 - 10307.025: 1.0598% ( 10)
00:14:51.828 10307.025 - 10366.604: 1.2636% ( 15)
00:14:51.828 10366.604 - 10426.182: 1.4130% ( 11)
00:14:51.828 10426.182 - 10485.760: 1.4946% ( 6)
00:14:51.828 10485.760 - 10545.338: 1.6304% ( 10)
00:14:51.828 10545.338 - 10604.916: 1.7935% ( 12)
00:14:51.828 10604.916 - 10664.495: 1.9158% ( 9)
00:14:51.828 10664.495 - 10724.073: 2.1196% ( 15)
00:14:51.828 10724.073 - 10783.651: 2.3234% ( 15)
00:14:51.828 10783.651 - 10843.229: 2.4864% ( 12)
00:14:51.828 10843.229 - 10902.807: 2.6495% ( 12)
00:14:51.828 10902.807 - 10962.385: 2.8397% ( 14)
00:14:51.828 10962.385 - 11021.964: 3.0163% ( 13)
00:14:51.828 11021.964 - 11081.542: 3.2201% ( 15)
00:14:51.828 11081.542 - 11141.120: 3.4103% ( 14)
00:14:51.828 11141.120 - 11200.698: 3.6277% ( 16)
00:14:51.828 11200.698 - 11260.276: 3.8451% ( 16)
00:14:51.828 11260.276 - 11319.855: 4.0217% ( 13)
00:14:51.828 11319.855 - 11379.433: 4.1848% ( 12)
00:14:51.829 11379.433 - 11439.011: 4.3886% ( 15)
00:14:51.829 11439.011 - 11498.589: 4.5245% ( 10)
00:14:51.829 11498.589 - 11558.167: 4.6196% ( 7)
00:14:51.829 11558.167 - 11617.745: 4.7690% ( 11)
00:14:51.829 11617.745 - 11677.324: 4.9457% ( 13) 00:14:51.829 11677.324 - 11736.902: 5.1495% ( 15) 00:14:51.829 11736.902 - 11796.480: 5.3533% ( 15) 00:14:51.829 11796.480 - 11856.058: 5.6114% ( 19) 00:14:51.829 11856.058 - 11915.636: 5.8288% ( 16) 00:14:51.829 11915.636 - 11975.215: 6.0734% ( 18) 00:14:51.829 11975.215 - 12034.793: 6.3043% ( 17) 00:14:51.829 12034.793 - 12094.371: 6.6033% ( 22) 00:14:51.829 12094.371 - 12153.949: 6.8886% ( 21) 00:14:51.829 12153.949 - 12213.527: 7.1332% ( 18) 00:14:51.829 12213.527 - 12273.105: 7.4457% ( 23) 00:14:51.829 12273.105 - 12332.684: 7.7582% ( 23) 00:14:51.829 12332.684 - 12392.262: 8.2337% ( 35) 00:14:51.829 12392.262 - 12451.840: 8.6277% ( 29) 00:14:51.829 12451.840 - 12511.418: 9.0353% ( 30) 00:14:51.829 12511.418 - 12570.996: 9.3478% ( 23) 00:14:51.829 12570.996 - 12630.575: 9.8098% ( 34) 00:14:51.829 12630.575 - 12690.153: 10.2582% ( 33) 00:14:51.829 12690.153 - 12749.731: 10.7337% ( 35) 00:14:51.829 12749.731 - 12809.309: 11.1821% ( 33) 00:14:51.829 12809.309 - 12868.887: 11.6033% ( 31) 00:14:51.829 12868.887 - 12928.465: 12.0245% ( 31) 00:14:51.829 12928.465 - 12988.044: 12.5000% ( 35) 00:14:51.829 12988.044 - 13047.622: 12.9348% ( 32) 00:14:51.829 13047.622 - 13107.200: 13.5190% ( 43) 00:14:51.829 13107.200 - 13166.778: 13.9946% ( 35) 00:14:51.829 13166.778 - 13226.356: 14.5788% ( 43) 00:14:51.829 13226.356 - 13285.935: 15.2038% ( 46) 00:14:51.829 13285.935 - 13345.513: 15.7065% ( 37) 00:14:51.829 13345.513 - 13405.091: 16.3043% ( 44) 00:14:51.829 13405.091 - 13464.669: 16.7663% ( 34) 00:14:51.829 13464.669 - 13524.247: 17.3370% ( 42) 00:14:51.829 13524.247 - 13583.825: 17.7717% ( 32) 00:14:51.829 13583.825 - 13643.404: 18.2337% ( 34) 00:14:51.829 13643.404 - 13702.982: 18.6413% ( 30) 00:14:51.829 13702.982 - 13762.560: 19.1168% ( 35) 00:14:51.829 13762.560 - 13822.138: 19.5516% ( 32) 00:14:51.829 13822.138 - 13881.716: 20.0543% ( 37) 00:14:51.829 13881.716 - 13941.295: 20.5299% ( 35) 00:14:51.829 13941.295 - 14000.873: 21.0462% ( 38) 00:14:51.829 14000.873 - 14060.451: 21.6168% ( 42) 00:14:51.829 14060.451 - 14120.029: 22.2826% ( 49) 00:14:51.829 14120.029 - 14179.607: 22.9212% ( 47) 00:14:51.829 14179.607 - 14239.185: 23.5190% ( 44) 00:14:51.829 14239.185 - 14298.764: 24.0489% ( 39) 00:14:51.829 14298.764 - 14358.342: 24.7147% ( 49) 00:14:51.829 14358.342 - 14417.920: 25.2310% ( 38) 00:14:51.829 14417.920 - 14477.498: 25.7880% ( 41) 00:14:51.829 14477.498 - 14537.076: 26.3723% ( 43) 00:14:51.829 14537.076 - 14596.655: 26.8342% ( 34) 00:14:51.829 14596.655 - 14656.233: 27.5951% ( 56) 00:14:51.829 14656.233 - 14715.811: 28.2201% ( 46) 00:14:51.829 14715.811 - 14775.389: 28.8179% ( 44) 00:14:51.829 14775.389 - 14834.967: 29.4565% ( 47) 00:14:51.829 14834.967 - 14894.545: 30.0272% ( 42) 00:14:51.829 14894.545 - 14954.124: 30.8016% ( 57) 00:14:51.829 14954.124 - 15013.702: 31.4538% ( 48) 00:14:51.829 15013.702 - 15073.280: 32.0788% ( 46) 00:14:51.829 15073.280 - 15132.858: 32.7446% ( 49) 00:14:51.829 15132.858 - 15192.436: 33.2880% ( 40) 00:14:51.829 15192.436 - 15252.015: 33.9130% ( 46) 00:14:51.829 15252.015 - 15371.171: 35.0272% ( 82) 00:14:51.829 15371.171 - 15490.327: 36.3859% ( 100) 00:14:51.829 15490.327 - 15609.484: 37.5000% ( 82) 00:14:51.829 15609.484 - 15728.640: 38.7364% ( 91) 00:14:51.829 15728.640 - 15847.796: 39.8913% ( 85) 00:14:51.829 15847.796 - 15966.953: 40.8832% ( 73) 00:14:51.829 15966.953 - 16086.109: 41.8342% ( 70) 00:14:51.829 16086.109 - 16205.265: 42.7038% ( 64) 00:14:51.829 16205.265 - 16324.422: 43.4918% ( 58) 
00:14:51.829 16324.422 - 16443.578: 44.0897% ( 44) 00:14:51.829 16443.578 - 16562.735: 44.7147% ( 46) 00:14:51.829 16562.735 - 16681.891: 45.3668% ( 48) 00:14:51.829 16681.891 - 16801.047: 46.0598% ( 51) 00:14:51.829 16801.047 - 16920.204: 46.6712% ( 45) 00:14:51.829 16920.204 - 17039.360: 47.3505% ( 50) 00:14:51.829 17039.360 - 17158.516: 47.9891% ( 47) 00:14:51.829 17158.516 - 17277.673: 48.6821% ( 51) 00:14:51.829 17277.673 - 17396.829: 49.4429% ( 56) 00:14:51.829 17396.829 - 17515.985: 50.2446% ( 59) 00:14:51.829 17515.985 - 17635.142: 51.1413% ( 66) 00:14:51.829 17635.142 - 17754.298: 52.1060% ( 71) 00:14:51.829 17754.298 - 17873.455: 53.0299% ( 68) 00:14:51.829 17873.455 - 17992.611: 54.1304% ( 81) 00:14:51.829 17992.611 - 18111.767: 55.0815% ( 70) 00:14:51.829 18111.767 - 18230.924: 56.2228% ( 84) 00:14:51.829 18230.924 - 18350.080: 57.4728% ( 92) 00:14:51.829 18350.080 - 18469.236: 58.7908% ( 97) 00:14:51.829 18469.236 - 18588.393: 60.1223% ( 98) 00:14:51.829 18588.393 - 18707.549: 61.6168% ( 110) 00:14:51.829 18707.549 - 18826.705: 63.0299% ( 104) 00:14:51.829 18826.705 - 18945.862: 64.4701% ( 106) 00:14:51.829 18945.862 - 19065.018: 65.8560% ( 102) 00:14:51.829 19065.018 - 19184.175: 67.3913% ( 113) 00:14:51.829 19184.175 - 19303.331: 68.8451% ( 107) 00:14:51.829 19303.331 - 19422.487: 70.3533% ( 111) 00:14:51.829 19422.487 - 19541.644: 71.9429% ( 117) 00:14:51.829 19541.644 - 19660.800: 73.3288% ( 102) 00:14:51.829 19660.800 - 19779.956: 74.7690% ( 106) 00:14:51.829 19779.956 - 19899.113: 76.1413% ( 101) 00:14:51.829 19899.113 - 20018.269: 77.3913% ( 92) 00:14:51.829 20018.269 - 20137.425: 78.6685% ( 94) 00:14:51.829 20137.425 - 20256.582: 79.8370% ( 86) 00:14:51.829 20256.582 - 20375.738: 81.1277% ( 95) 00:14:51.829 20375.738 - 20494.895: 82.4592% ( 98) 00:14:51.829 20494.895 - 20614.051: 83.6005% ( 84) 00:14:51.829 20614.051 - 20733.207: 84.7826% ( 87) 00:14:51.829 20733.207 - 20852.364: 85.9647% ( 87) 00:14:51.829 20852.364 - 20971.520: 87.1603% ( 88) 00:14:51.829 20971.520 - 21090.676: 88.1793% ( 75) 00:14:51.829 21090.676 - 21209.833: 89.2527% ( 79) 00:14:51.829 21209.833 - 21328.989: 90.1766% ( 68) 00:14:51.829 21328.989 - 21448.145: 91.0734% ( 66) 00:14:51.829 21448.145 - 21567.302: 91.9973% ( 68) 00:14:51.829 21567.302 - 21686.458: 92.7038% ( 52) 00:14:51.829 21686.458 - 21805.615: 93.4375% ( 54) 00:14:51.829 21805.615 - 21924.771: 94.0082% ( 42) 00:14:51.829 21924.771 - 22043.927: 94.5652% ( 41) 00:14:51.829 22043.927 - 22163.084: 94.9864% ( 31) 00:14:51.829 22163.084 - 22282.240: 95.3397% ( 26) 00:14:51.829 22282.240 - 22401.396: 95.6114% ( 20) 00:14:51.829 22401.396 - 22520.553: 96.0054% ( 29) 00:14:51.829 22520.553 - 22639.709: 96.2908% ( 21) 00:14:51.829 22639.709 - 22758.865: 96.5489% ( 19) 00:14:51.829 22758.865 - 22878.022: 96.6984% ( 11) 00:14:51.829 22878.022 - 22997.178: 96.8478% ( 11) 00:14:51.829 22997.178 - 23116.335: 96.9701% ( 9) 00:14:51.829 23116.335 - 23235.491: 97.0924% ( 9) 00:14:51.829 23235.491 - 23354.647: 97.2011% ( 8) 00:14:51.829 23354.647 - 23473.804: 97.3234% ( 9) 00:14:51.829 23473.804 - 23592.960: 97.4185% ( 7) 00:14:51.829 23592.960 - 23712.116: 97.5543% ( 10) 00:14:51.829 23712.116 - 23831.273: 97.6495% ( 7) 00:14:51.829 23831.273 - 23950.429: 97.7446% ( 7) 00:14:51.829 23950.429 - 24069.585: 97.8125% ( 5) 00:14:51.829 24069.585 - 24188.742: 97.8804% ( 5) 00:14:51.829 24188.742 - 24307.898: 97.9484% ( 5) 00:14:51.829 24307.898 - 24427.055: 98.0027% ( 4) 00:14:51.829 24427.055 - 24546.211: 98.0707% ( 5) 00:14:51.829 24546.211 - 24665.367: 
98.1114% ( 3) 00:14:51.829 24665.367 - 24784.524: 98.1522% ( 3) 00:14:51.829 24784.524 - 24903.680: 98.2065% ( 4) 00:14:51.829 24903.680 - 25022.836: 98.2609% ( 4) 00:14:51.829 38368.349 - 38606.662: 98.3152% ( 4) 00:14:51.829 38606.662 - 38844.975: 98.3696% ( 4) 00:14:51.829 38844.975 - 39083.287: 98.4375% ( 5) 00:14:51.829 39083.287 - 39321.600: 98.4918% ( 4) 00:14:51.829 39321.600 - 39559.913: 98.5462% ( 4) 00:14:51.829 39559.913 - 39798.225: 98.6141% ( 5) 00:14:51.829 39798.225 - 40036.538: 98.6821% ( 5) 00:14:51.829 40036.538 - 40274.851: 98.7364% ( 4) 00:14:51.829 40274.851 - 40513.164: 98.7908% ( 4) 00:14:51.829 40513.164 - 40751.476: 98.8587% ( 5) 00:14:51.829 40751.476 - 40989.789: 98.9266% ( 5) 00:14:51.829 40989.789 - 41228.102: 98.9810% ( 4) 00:14:51.829 41228.102 - 41466.415: 99.0353% ( 4) 00:14:51.829 41466.415 - 41704.727: 99.1033% ( 5) 00:14:51.829 41704.727 - 41943.040: 99.1304% ( 2) 00:14:51.829 48615.796 - 48854.109: 99.1576% ( 2) 00:14:51.829 48854.109 - 49092.422: 99.2120% ( 4) 00:14:51.829 49092.422 - 49330.735: 99.2663% ( 4) 00:14:51.829 49330.735 - 49569.047: 99.3342% ( 5) 00:14:51.829 49569.047 - 49807.360: 99.3886% ( 4) 00:14:51.829 49807.360 - 50045.673: 99.4565% ( 5) 00:14:51.829 50045.673 - 50283.985: 99.4837% ( 2) 00:14:51.830 50283.985 - 50522.298: 99.5516% ( 5) 00:14:51.830 50522.298 - 50760.611: 99.6060% ( 4) 00:14:51.830 50760.611 - 50998.924: 99.6603% ( 4) 00:14:51.830 50998.924 - 51237.236: 99.7147% ( 4) 00:14:51.830 51237.236 - 51475.549: 99.7690% ( 4) 00:14:51.830 51475.549 - 51713.862: 99.8098% ( 3) 00:14:51.830 51713.862 - 51952.175: 99.8641% ( 4) 00:14:51.830 51952.175 - 52190.487: 99.9185% ( 4) 00:14:51.830 52190.487 - 52428.800: 99.9728% ( 4) 00:14:51.830 52428.800 - 52667.113: 100.0000% ( 2) 00:14:51.830 00:14:51.830 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:51.830 ============================================================================== 00:14:51.830 Range in us Cumulative IO count 00:14:51.830 9830.400 - 9889.978: 0.0408% ( 3) 00:14:51.830 9889.978 - 9949.556: 0.0951% ( 4) 00:14:51.830 9949.556 - 10009.135: 0.1495% ( 4) 00:14:51.830 10009.135 - 10068.713: 0.2038% ( 4) 00:14:51.830 10068.713 - 10128.291: 0.2582% ( 4) 00:14:51.830 10128.291 - 10187.869: 0.3125% ( 4) 00:14:51.830 10187.869 - 10247.447: 0.4348% ( 9) 00:14:51.830 10247.447 - 10307.025: 0.5571% ( 9) 00:14:51.830 10307.025 - 10366.604: 0.6658% ( 8) 00:14:51.830 10366.604 - 10426.182: 0.7745% ( 8) 00:14:51.830 10426.182 - 10485.760: 0.8560% ( 6) 00:14:51.830 10485.760 - 10545.338: 0.9375% ( 6) 00:14:51.830 10545.338 - 10604.916: 1.0054% ( 5) 00:14:51.830 10604.916 - 10664.495: 1.1685% ( 12) 00:14:51.830 10664.495 - 10724.073: 1.3859% ( 16) 00:14:51.830 10724.073 - 10783.651: 1.5489% ( 12) 00:14:51.830 10783.651 - 10843.229: 1.7255% ( 13) 00:14:51.830 10843.229 - 10902.807: 1.9293% ( 15) 00:14:51.830 10902.807 - 10962.385: 2.0924% ( 12) 00:14:51.830 10962.385 - 11021.964: 2.2554% ( 12) 00:14:51.830 11021.964 - 11081.542: 2.3505% ( 7) 00:14:51.830 11081.542 - 11141.120: 2.4592% ( 8) 00:14:51.830 11141.120 - 11200.698: 2.7038% ( 18) 00:14:51.830 11200.698 - 11260.276: 2.9212% ( 16) 00:14:51.830 11260.276 - 11319.855: 3.0978% ( 13) 00:14:51.830 11319.855 - 11379.433: 3.3560% ( 19) 00:14:51.830 11379.433 - 11439.011: 3.6277% ( 20) 00:14:51.830 11439.011 - 11498.589: 3.9130% ( 21) 00:14:51.830 11498.589 - 11558.167: 4.2527% ( 25) 00:14:51.830 11558.167 - 11617.745: 4.4837% ( 17) 00:14:51.830 11617.745 - 11677.324: 4.5924% ( 8) 00:14:51.830 11677.324 - 11736.902: 
4.7554% ( 12) 00:14:51.830 11736.902 - 11796.480: 4.8913% ( 10) 00:14:51.830 11796.480 - 11856.058: 5.0543% ( 12) 00:14:51.830 11856.058 - 11915.636: 5.3533% ( 22) 00:14:51.830 11915.636 - 11975.215: 5.7201% ( 27) 00:14:51.830 11975.215 - 12034.793: 6.0326% ( 23) 00:14:51.830 12034.793 - 12094.371: 6.3587% ( 24) 00:14:51.830 12094.371 - 12153.949: 6.6304% ( 20) 00:14:51.830 12153.949 - 12213.527: 6.9022% ( 20) 00:14:51.830 12213.527 - 12273.105: 7.2554% ( 26) 00:14:51.830 12273.105 - 12332.684: 7.6087% ( 26) 00:14:51.830 12332.684 - 12392.262: 7.9755% ( 27) 00:14:51.830 12392.262 - 12451.840: 8.4783% ( 37) 00:14:51.830 12451.840 - 12511.418: 8.8315% ( 26) 00:14:51.830 12511.418 - 12570.996: 9.2527% ( 31) 00:14:51.830 12570.996 - 12630.575: 9.6060% ( 26) 00:14:51.830 12630.575 - 12690.153: 10.0000% ( 29) 00:14:51.830 12690.153 - 12749.731: 10.4212% ( 31) 00:14:51.830 12749.731 - 12809.309: 10.8696% ( 33) 00:14:51.830 12809.309 - 12868.887: 11.3587% ( 36) 00:14:51.830 12868.887 - 12928.465: 11.8614% ( 37) 00:14:51.830 12928.465 - 12988.044: 12.4864% ( 46) 00:14:51.830 12988.044 - 13047.622: 13.0571% ( 42) 00:14:51.830 13047.622 - 13107.200: 13.6549% ( 44) 00:14:51.830 13107.200 - 13166.778: 14.2120% ( 41) 00:14:51.830 13166.778 - 13226.356: 14.6332% ( 31) 00:14:51.830 13226.356 - 13285.935: 15.0815% ( 33) 00:14:51.830 13285.935 - 13345.513: 15.6386% ( 41) 00:14:51.830 13345.513 - 13405.091: 16.2364% ( 44) 00:14:51.830 13405.091 - 13464.669: 16.8071% ( 42) 00:14:51.830 13464.669 - 13524.247: 17.2962% ( 36) 00:14:51.830 13524.247 - 13583.825: 18.0571% ( 56) 00:14:51.830 13583.825 - 13643.404: 18.6277% ( 42) 00:14:51.830 13643.404 - 13702.982: 19.3207% ( 51) 00:14:51.830 13702.982 - 13762.560: 20.0272% ( 52) 00:14:51.830 13762.560 - 13822.138: 20.9103% ( 65) 00:14:51.830 13822.138 - 13881.716: 21.6984% ( 58) 00:14:51.830 13881.716 - 13941.295: 22.4185% ( 53) 00:14:51.830 13941.295 - 14000.873: 23.0435% ( 46) 00:14:51.830 14000.873 - 14060.451: 23.6821% ( 47) 00:14:51.830 14060.451 - 14120.029: 24.3071% ( 46) 00:14:51.830 14120.029 - 14179.607: 24.9728% ( 49) 00:14:51.830 14179.607 - 14239.185: 25.6522% ( 50) 00:14:51.830 14239.185 - 14298.764: 26.2364% ( 43) 00:14:51.830 14298.764 - 14358.342: 26.9293% ( 51) 00:14:51.830 14358.342 - 14417.920: 27.5679% ( 47) 00:14:51.830 14417.920 - 14477.498: 28.2473% ( 50) 00:14:51.830 14477.498 - 14537.076: 28.9538% ( 52) 00:14:51.830 14537.076 - 14596.655: 29.6196% ( 49) 00:14:51.830 14596.655 - 14656.233: 30.1359% ( 38) 00:14:51.830 14656.233 - 14715.811: 30.5842% ( 33) 00:14:51.830 14715.811 - 14775.389: 31.1685% ( 43) 00:14:51.830 14775.389 - 14834.967: 31.7799% ( 45) 00:14:51.830 14834.967 - 14894.545: 32.4049% ( 46) 00:14:51.830 14894.545 - 14954.124: 32.9891% ( 43) 00:14:51.830 14954.124 - 15013.702: 33.4918% ( 37) 00:14:51.830 15013.702 - 15073.280: 34.0082% ( 38) 00:14:51.830 15073.280 - 15132.858: 34.4565% ( 33) 00:14:51.830 15132.858 - 15192.436: 34.8913% ( 32) 00:14:51.830 15192.436 - 15252.015: 35.2989% ( 30) 00:14:51.830 15252.015 - 15371.171: 36.0462% ( 55) 00:14:51.830 15371.171 - 15490.327: 36.9837% ( 69) 00:14:51.830 15490.327 - 15609.484: 37.7717% ( 58) 00:14:51.830 15609.484 - 15728.640: 38.7636% ( 73) 00:14:51.830 15728.640 - 15847.796: 39.7554% ( 73) 00:14:51.830 15847.796 - 15966.953: 40.5163% ( 56) 00:14:51.830 15966.953 - 16086.109: 41.1141% ( 44) 00:14:51.830 16086.109 - 16205.265: 41.7120% ( 44) 00:14:51.830 16205.265 - 16324.422: 42.2962% ( 43) 00:14:51.830 16324.422 - 16443.578: 42.7582% ( 34) 00:14:51.830 16443.578 - 16562.735: 
00:14:51.830 [cumulative latency buckets up to 24665.367 us elided (continued from the previous device): ~50% of IOs complete by 17515.985 us, 98.26% by 24665.367 us]
00:14:51.831 [outlier buckets elided: 99.13% by 38606.662 us, 100.0000% at 48139.171 us]
00:14:51.831 
00:14:51.831 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:14:51.831 ==============================================================================
00:14:51.831        Range in us     Cumulative    IO count
00:14:51.831 [cumulative latency buckets from 9889.978 us through 25022.836 us elided: ~50% of IOs complete by 17515.985 us, 98.26% by 25022.836 us; outlier buckets bring the total to 100.0000% at 44564.480 us]
00:14:51.831 
00:14:51.831 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:14:51.832 ==============================================================================
00:14:51.832        Range in us     Cumulative    IO count
00:14:51.832 [cumulative latency buckets from 9949.556 us through 25261.149 us elided: ~50% of IOs complete by 17515.985 us, 98.26% by 25261.149 us; outlier buckets bring the total to 100.0000% at 40513.164 us]
00:14:51.832 
00:14:51.832 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:14:51.834 ==============================================================================
00:14:51.834        Range in us     Cumulative    IO count
00:14:51.834 [cumulative latency buckets from 10068.713 us through 26571.869 us elided: ~50% of IOs complete by 17635.142 us, 99.13% by 26571.869 us; outlier buckets bring the total to 100.0000% at 36461.847 us]
00:14:51.834 
00:14:51.834 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:51.835 ==============================================================================
00:14:51.835        Range in us     Cumulative    IO count
00:14:51.835 [cumulative latency buckets from 9830.400 us through 24665.367 us elided: ~50% of IOs complete by 17515.985 us, 99.13% by 24665.367 us; outlier buckets bring the total to 100.0000% at 32410.531 us]
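The histograms above are cumulative: each bucket reports the fraction of IOs that completed at or below that latency, so a percentile is read off as the first bucket whose cumulative percentage reaches the target. A minimal illustration in plain C (not SPDK code; the bucket values are transcribed from the 0000:00:13.0 histogram above):

/* Illustrative helper: given cumulative-percentage buckets like the
 * histograms above, return the first bucket edge whose cumulative
 * percentage reaches a target percentile. */
#include <stdio.h>

struct bucket {
    double edge_us;          /* upper edge of the bucket, in microseconds */
    double cumulative_pct;   /* % of IOs completed at or below that edge */
};

static double percentile_us(const struct bucket *b, int n, double pct)
{
    for (int i = 0; i < n; i++) {
        if (b[i].cumulative_pct >= pct) {
            return b[i].edge_us;
        }
    }
    return b[n - 1].edge_us;
}

int main(void)
{
    /* A few buckets transcribed from the 0000:00:13.0 histogram. */
    struct bucket b[] = {
        { 17396.829,  49.4701 },
        { 17515.985,  50.1223 },
        { 25022.836,  98.2609 },
        { 44564.480, 100.0000 },
    };
    printf("p50 ~ %.0f us\n", percentile_us(b, 4, 50.0)); /* ~17516 us */
    return 0;
}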
00:14:51.836 13:34:43 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:14:51.836 ************************************
00:14:51.836 END TEST nvme_perf
00:14:51.836 ************************************
00:14:51.836 
00:14:51.836 real	0m2.801s
00:14:51.836 user	0m2.370s
00:14:51.836 sys	0m0.299s
00:14:51.836 13:34:43 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:51.836 13:34:43 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:14:51.836 13:34:43 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:51.836 13:34:43 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:51.836 13:34:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:51.836 13:34:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:51.836 ************************************
00:14:51.836 START TEST nvme_hello_world
00:14:51.836 ************************************
00:14:51.836 13:34:43 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:52.095 Initializing NVMe Controllers
00:14:52.095 Attached to 0000:00:10.0
00:14:52.095   Namespace ID: 1 size: 6GB
00:14:52.095 Attached to 0000:00:11.0
00:14:52.095   Namespace ID: 1 size: 5GB
00:14:52.095 Attached to 0000:00:13.0
00:14:52.095   Namespace ID: 1 size: 1GB
00:14:52.095 Attached to 0000:00:12.0
00:14:52.095   Namespace ID: 1 size: 4GB
00:14:52.095   Namespace ID: 2 size: 4GB
00:14:52.095   Namespace ID: 3 size: 4GB
00:14:52.095 Initialization complete.
00:14:52.095 INFO: using host memory buffer for IO
00:14:52.095 Hello world!
00:14:52.095 INFO: using host memory buffer for IO
00:14:52.095 Hello world!
00:14:52.095 INFO: using host memory buffer for IO
00:14:52.095 Hello world!
00:14:52.095 INFO: using host memory buffer for IO
00:14:52.095 Hello world!
00:14:52.095 INFO: using host memory buffer for IO
00:14:52.095 Hello world!
00:14:52.095 INFO: using host memory buffer for IO
00:14:52.095 Hello world!
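Each "Hello world!" above is one write-then-read round trip against one attached namespace, driven through the userspace NVMe driver. A condensed sketch of that flow, assuming the public SPDK NVMe API (an illustration, not a verbatim copy of examples/nvme/hello_world; error handling and per-namespace iteration are trimmed):

/* Condensed sketch of the hello_world flow using the public SPDK NVMe API. */
#include "spdk/nvme.h"
#include "spdk/env.h"
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

static struct spdk_nvme_ctrlr *g_ctrlr;
static bool g_done;

static bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    return true;   /* attach to every controller found on the PCIe bus */
}

static void attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    g_ctrlr = ctrlr;   /* keep only the last one for brevity */
}

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_done = true;
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) < 0) return 1;

    /* Enumerate controllers (0000:00:10.0 .. 0000:00:12.0 above). */
    spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);

    /* "using host memory buffer for IO": a pinned, DMA-able buffer. */
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                             SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    snprintf(buf, 0x1000, "%s", "Hello world!");

    g_done = false;
    spdk_nvme_ns_cmd_write(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
    while (!g_done) spdk_nvme_qpair_process_completions(qpair, 0);

    memset(buf, 0, 0x1000);
    g_done = false;
    spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
    while (!g_done) spdk_nvme_qpair_process_completions(qpair, 0);

    printf("%s\n", buf);   /* prints "Hello world!" */
    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
    spdk_nvme_detach(g_ctrlr);
    return 0;
}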
00:14:52.095 ************************************
00:14:52.095 END TEST nvme_hello_world
00:14:52.095 ************************************
00:14:52.095 
00:14:52.095 real	0m0.362s
00:14:52.095 user	0m0.154s
00:14:52.095 sys	0m0.159s
00:14:52.095 13:34:44 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:52.095 13:34:44 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:14:52.095 13:34:44 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:52.095 13:34:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:52.095 13:34:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:52.095 13:34:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:52.095 ************************************
00:14:52.095 START TEST nvme_sgl
00:14:52.095 ************************************
00:14:52.095 13:34:44 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:52.661 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:14:52.661 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:14:52.661 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:14:52.661 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:14:52.661 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:14:52.661 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:14:52.661 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:14:52.661 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:14:52.661 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:14:52.661 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:14:52.661 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:14:52.661 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:14:52.661 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:14:52.661 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:14:52.661 NVMe Readv/Writev Request test
00:14:52.661 Attached to 0000:00:10.0
00:14:52.661 Attached to 0000:00:11.0
00:14:52.661 Attached to 0000:00:13.0
00:14:52.661 Attached to 0000:00:12.0
00:14:52.661 0000:00:10.0: build_io_request_2 test passed
00:14:52.661 0000:00:10.0: build_io_request_4 test passed
00:14:52.661 0000:00:10.0: build_io_request_5 test passed
00:14:52.661 0000:00:10.0: build_io_request_6 test passed
00:14:52.661 0000:00:10.0: build_io_request_7 test passed
00:14:52.661 0000:00:10.0: build_io_request_10 test passed
00:14:52.661 0000:00:11.0: build_io_request_2 test passed
00:14:52.661 0000:00:11.0: build_io_request_4 test passed
00:14:52.661 0000:00:11.0: build_io_request_5 test passed
00:14:52.661 0000:00:11.0: build_io_request_6 test passed
00:14:52.661 0000:00:11.0: build_io_request_7 test passed
00:14:52.661 0000:00:11.0: build_io_request_10 test passed
00:14:52.661 Cleaning up...
00:14:52.662 
00:14:52.662 real	0m0.492s
00:14:52.662 user	0m0.266s
00:14:52.662 sys	0m0.163s
00:14:52.662 13:34:44 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:52.662 13:34:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:14:52.662 ************************************
00:14:52.662 END TEST nvme_sgl
00:14:52.662 ************************************
00:14:52.662 13:34:44 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:52.662 13:34:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:52.662 13:34:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:52.662 13:34:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:52.662 ************************************
00:14:52.662 START TEST nvme_e2edp
00:14:52.662 ************************************
00:14:52.662 13:34:44 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:53.228 NVMe Write/Read with End-to-End data protection test
00:14:53.228 Attached to 0000:00:10.0
00:14:53.228 Attached to 0000:00:11.0
00:14:53.228 Attached to 0000:00:13.0
00:14:53.228 Attached to 0000:00:12.0
00:14:53.228 Cleaning up...
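The nvme_dp run above writes and reads with NVMe end-to-end data protection enabled. A hedged sketch of one such submission: SPDK_NVME_IO_FLAGS_PRACT and the PRCHK flags are real SPDK io_flags, while the LBA and tag values here are purely illustrative and assume a namespace formatted with protection information:

/* Illustrative only: submit a write with end-to-end data protection,
 * asking the controller to insert PI (PRACT=1) and check the reference
 * tag.  Assumes the namespace was formatted with PI type 1. */
#include "spdk/nvme.h"

static void write_with_pi(struct spdk_nvme_ns *ns,
                          struct spdk_nvme_qpair *qpair,
                          void *buf, spdk_nvme_cmd_cb cb)
{
    uint32_t io_flags = SPDK_NVME_IO_FLAGS_PRACT |
                        SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;

    /* The *_with_md variant carries the apptag mask/value used for
     * protection-information checking. */
    spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, NULL /* metadata */,
                                   0x1000 /* LBA, illustrative */, 1,
                                   cb, NULL, io_flags,
                                   0 /* apptag mask */, 0 /* apptag */);
}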
00:14:53.228 ************************************
00:14:53.228 END TEST nvme_e2edp
00:14:53.228 ************************************
00:14:53.228 
00:14:53.228 real	0m0.384s
00:14:53.228 user	0m0.149s
00:14:53.228 sys	0m0.185s
00:14:53.228 13:34:44 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:53.228 13:34:44 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:14:53.228 13:34:45 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:53.228 13:34:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:53.228 13:34:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:53.228 13:34:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:53.228 ************************************
00:14:53.228 START TEST nvme_reserve
00:14:53.228 ************************************
00:14:53.228 13:34:45 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:53.486 =====================================================
00:14:53.486 NVMe Controller at PCI bus 0, device 16, function 0
00:14:53.486 =====================================================
00:14:53.486 Reservations: Not Supported
00:14:53.486 =====================================================
00:14:53.486 NVMe Controller at PCI bus 0, device 17, function 0
00:14:53.486 =====================================================
00:14:53.486 Reservations: Not Supported
00:14:53.486 =====================================================
00:14:53.486 NVMe Controller at PCI bus 0, device 19, function 0
00:14:53.486 =====================================================
00:14:53.486 Reservations: Not Supported
00:14:53.486 =====================================================
00:14:53.486 NVMe Controller at PCI bus 0, device 18, function 0
00:14:53.486 =====================================================
00:14:53.486 Reservations: Not Supported
00:14:53.486 Reservation test passed
00:14:53.487 ************************************
00:14:53.487 END TEST nvme_reserve
00:14:53.487 ************************************
00:14:53.487 
00:14:53.487 real	0m0.381s
00:14:53.487 user	0m0.149s
00:14:53.487 sys	0m0.184s
00:14:53.487 13:34:45 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:53.487 13:34:45 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:14:53.487 13:34:45 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:53.487 13:34:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:53.487 13:34:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:53.487 13:34:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:53.487 ************************************
00:14:53.487 START TEST nvme_err_injection
00:14:53.487 ************************************
00:14:53.487 13:34:45 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:54.053 NVMe Error Injection test
00:14:54.053 Attached to 0000:00:10.0
00:14:54.053 Attached to 0000:00:11.0
00:14:54.053 Attached to 0000:00:13.0
00:14:54.053 Attached to 0000:00:12.0
00:14:54.053 0000:00:10.0: get features failed as expected
00:14:54.053 0000:00:11.0: get features failed as expected
00:14:54.053 0000:00:13.0: get features failed as expected
00:14:54.053 0000:00:12.0: get features failed as expected
00:14:54.053 0000:00:10.0: get features successfully as expected
00:14:54.053 0000:00:11.0: get features successfully as expected
00:14:54.053 0000:00:13.0: get features successfully as expected
00:14:54.053 0000:00:12.0: get features successfully as expected
00:14:54.053 0000:00:10.0: read failed as expected
00:14:54.053 0000:00:11.0: read failed as expected
00:14:54.053 0000:00:13.0: read failed as expected
00:14:54.053 0000:00:12.0: read failed as expected
00:14:54.053 0000:00:10.0: read successfully as expected
00:14:54.053 0000:00:11.0: read successfully as expected
00:14:54.053 0000:00:13.0: read successfully as expected
00:14:54.053 0000:00:12.0: read successfully as expected
00:14:54.053 Cleaning up...
00:14:54.053 ************************************
00:14:54.053 END TEST nvme_err_injection
00:14:54.053 ************************************
00:14:54.053 
00:14:54.053 real	0m0.358s
00:14:54.053 user	0m0.141s
00:14:54.053 sys	0m0.166s
00:14:54.053 13:34:45 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:54.053 13:34:45 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:14:54.053 13:34:45 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:14:54.053 13:34:45 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:14:54.053 13:34:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:54.053 13:34:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:54.053 ************************************
00:14:54.053 START TEST nvme_overhead
00:14:54.053 ************************************
00:14:54.053 13:34:45 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:14:55.427 Initializing NVMe Controllers
00:14:55.427 Attached to 0000:00:10.0
00:14:55.427 Attached to 0000:00:11.0
00:14:55.427 Attached to 0000:00:13.0
00:14:55.427 Attached to 0000:00:12.0
00:14:55.427 Initialization complete. Launching workers.
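The nvme_err_injection pattern above, one "failed as expected" followed by one "successfully as expected" per device, comes from arming an injected command error, issuing Get Features, then disarming the injection and retrying. A sketch of the arming step; spdk_nvme_qpair_add_cmd_error_injection is the real SPDK hook (available in builds with error injection enabled), and the injected status chosen here is illustrative:

/* Illustrative: make the next Get Features on the admin queue complete
 * with an injected "invalid opcode" status, then remove the injection. */
#include "spdk/nvme.h"

static void inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
    /* qpair == NULL targets the admin qpair */
    spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES,
            true /* do not submit; complete with the injected status */,
            0 /* no timeout */, 1 /* inject once */,
            SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_OPCODE);
}

static void clear_injection(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES);
}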
00:14:55.427 submit (in ns)   avg, min, max =  20585.4,  14160.0, 129296.8
00:14:55.427 complete (in ns) avg, min, max =  14246.7,   9453.6, 100205.5
00:14:55.427 
00:14:55.427 Submit histogram
00:14:55.427 ================
00:14:55.427        Range in us     Cumulative     Count
00:14:55.428 [submit-latency buckets from 14.138 us through 129.396 us elided: ~50% of submissions complete by 21.644 us, ~90% by 24.902 us, ~99% by ~39.6 us, 100.0000% at 129.396 us]
00:14:55.428 
00:14:55.428 Complete histogram
00:14:55.428 ==================
00:14:55.428        Range in us     Cumulative     Count
00:14:55.428     9.425 -     9.484:    0.0429% (    5)
00:14:55.428     9.484 -     9.542:
0.2574% ( 25) 00:14:55.428 9.542 - 9.600: 1.5356% ( 149) 00:14:55.428 9.600 - 9.658: 5.4559% ( 457) 00:14:55.428 9.658 - 9.716: 11.2379% ( 674) 00:14:55.428 9.716 - 9.775: 17.7061% ( 754) 00:14:55.428 9.775 - 9.833: 22.4329% ( 551) 00:14:55.428 9.833 - 9.891: 25.3067% ( 335) 00:14:55.428 9.891 - 9.949: 26.9881% ( 196) 00:14:55.428 9.949 - 10.007: 27.9832% ( 116) 00:14:55.428 10.007 - 10.065: 28.6609% ( 79) 00:14:55.428 10.065 - 10.124: 28.9697% ( 36) 00:14:55.428 10.124 - 10.182: 29.2871% ( 37) 00:14:55.428 10.182 - 10.240: 29.4501% ( 19) 00:14:55.428 10.240 - 10.298: 29.5702% ( 14) 00:14:55.428 10.298 - 10.356: 29.7075% ( 16) 00:14:55.428 10.356 - 10.415: 29.8104% ( 12) 00:14:55.428 10.415 - 10.473: 29.9391% ( 15) 00:14:55.428 10.473 - 10.531: 30.0420% ( 12) 00:14:55.428 10.531 - 10.589: 30.2479% ( 24) 00:14:55.428 10.589 - 10.647: 30.4967% ( 29) 00:14:55.429 10.647 - 10.705: 30.7969% ( 35) 00:14:55.429 10.705 - 10.764: 31.0028% ( 24) 00:14:55.429 10.764 - 10.822: 31.1830% ( 21) 00:14:55.429 10.822 - 10.880: 31.3460% ( 19) 00:14:55.429 10.880 - 10.938: 31.4575% ( 13) 00:14:55.429 10.938 - 10.996: 31.5776% ( 14) 00:14:55.429 10.996 - 11.055: 31.6805% ( 12) 00:14:55.429 11.055 - 11.113: 31.7492% ( 8) 00:14:55.429 11.113 - 11.171: 31.8264% ( 9) 00:14:55.429 11.171 - 11.229: 31.8607% ( 4) 00:14:55.429 11.229 - 11.287: 31.8950% ( 4) 00:14:55.429 11.287 - 11.345: 31.9808% ( 10) 00:14:55.429 11.345 - 11.404: 32.0751% ( 11) 00:14:55.429 11.404 - 11.462: 32.1095% ( 4) 00:14:55.429 11.462 - 11.520: 32.1438% ( 4) 00:14:55.429 11.520 - 11.578: 32.1609% ( 2) 00:14:55.429 11.578 - 11.636: 32.2381% ( 9) 00:14:55.429 11.636 - 11.695: 32.2982% ( 7) 00:14:55.429 11.695 - 11.753: 32.3497% ( 6) 00:14:55.429 11.753 - 11.811: 32.4097% ( 7) 00:14:55.429 11.811 - 11.869: 32.5041% ( 11) 00:14:55.429 11.869 - 11.927: 32.5984% ( 11) 00:14:55.429 11.927 - 11.985: 32.6756% ( 9) 00:14:55.429 11.985 - 12.044: 32.7872% ( 13) 00:14:55.429 12.044 - 12.102: 32.9158% ( 15) 00:14:55.429 12.102 - 12.160: 33.0188% ( 12) 00:14:55.429 12.160 - 12.218: 33.1904% ( 20) 00:14:55.429 12.218 - 12.276: 33.3448% ( 18) 00:14:55.429 12.276 - 12.335: 33.4734% ( 15) 00:14:55.429 12.335 - 12.393: 33.5592% ( 10) 00:14:55.429 12.393 - 12.451: 33.6879% ( 15) 00:14:55.429 12.451 - 12.509: 33.7823% ( 11) 00:14:55.429 12.509 - 12.567: 33.8595% ( 9) 00:14:55.429 12.567 - 12.625: 33.9624% ( 12) 00:14:55.429 12.625 - 12.684: 34.0311% ( 8) 00:14:55.429 12.684 - 12.742: 34.0825% ( 6) 00:14:55.429 12.742 - 12.800: 34.1597% ( 9) 00:14:55.429 12.800 - 12.858: 34.2369% ( 9) 00:14:55.429 12.858 - 12.916: 34.3485% ( 13) 00:14:55.429 12.916 - 12.975: 34.4428% ( 11) 00:14:55.429 12.975 - 13.033: 34.5029% ( 7) 00:14:55.429 13.033 - 13.091: 34.5972% ( 11) 00:14:55.429 13.091 - 13.149: 34.6744% ( 9) 00:14:55.429 13.149 - 13.207: 34.7345% ( 7) 00:14:55.429 13.207 - 13.265: 34.7945% ( 7) 00:14:55.429 13.265 - 13.324: 34.8546% ( 7) 00:14:55.429 13.324 - 13.382: 34.9919% ( 16) 00:14:55.429 13.382 - 13.440: 35.0691% ( 9) 00:14:55.429 13.440 - 13.498: 35.1377% ( 8) 00:14:55.429 13.498 - 13.556: 35.2578% ( 14) 00:14:55.429 13.556 - 13.615: 35.3693% ( 13) 00:14:55.429 13.615 - 13.673: 35.5151% ( 17) 00:14:55.429 13.673 - 13.731: 35.6610% ( 17) 00:14:55.429 13.731 - 13.789: 35.7982% ( 16) 00:14:55.429 13.789 - 13.847: 36.0727% ( 32) 00:14:55.429 13.847 - 13.905: 36.2958% ( 26) 00:14:55.429 13.905 - 13.964: 36.6389% ( 40) 00:14:55.429 13.964 - 14.022: 36.9478% ( 36) 00:14:55.429 14.022 - 14.080: 37.2394% ( 34) 00:14:55.429 14.080 - 14.138: 37.6255% ( 45) 00:14:55.429 
14.138 - 14.196: 38.1831% ( 65) 00:14:55.429 14.196 - 14.255: 38.7921% ( 71) 00:14:55.429 14.255 - 14.313: 39.3841% ( 69) 00:14:55.429 14.313 - 14.371: 40.1904% ( 94) 00:14:55.429 14.371 - 14.429: 40.8681% ( 79) 00:14:55.429 14.429 - 14.487: 41.7003% ( 97) 00:14:55.429 14.487 - 14.545: 42.5152% ( 95) 00:14:55.429 14.545 - 14.604: 43.5532% ( 121) 00:14:55.429 14.604 - 14.662: 44.5998% ( 122) 00:14:55.429 14.662 - 14.720: 45.8694% ( 148) 00:14:55.429 14.720 - 14.778: 47.1133% ( 145) 00:14:55.429 14.778 - 14.836: 48.4430% ( 155) 00:14:55.429 14.836 - 14.895: 49.6955% ( 146) 00:14:55.429 14.895 - 15.011: 52.3805% ( 313) 00:14:55.429 15.011 - 15.127: 54.9284% ( 297) 00:14:55.429 15.127 - 15.244: 57.5791% ( 309) 00:14:55.429 15.244 - 15.360: 59.8610% ( 266) 00:14:55.429 15.360 - 15.476: 62.4689% ( 304) 00:14:55.429 15.476 - 15.593: 64.8108% ( 273) 00:14:55.429 15.593 - 15.709: 66.9297% ( 247) 00:14:55.429 15.709 - 15.825: 68.9114% ( 231) 00:14:55.429 15.825 - 15.942: 71.0646% ( 251) 00:14:55.429 15.942 - 16.058: 73.0205% ( 228) 00:14:55.429 16.058 - 16.175: 74.8306% ( 211) 00:14:55.429 16.175 - 16.291: 76.6063% ( 207) 00:14:55.429 16.291 - 16.407: 78.2448% ( 191) 00:14:55.429 16.407 - 16.524: 79.8833% ( 191) 00:14:55.429 16.524 - 16.640: 81.3846% ( 175) 00:14:55.429 16.640 - 16.756: 82.6456% ( 147) 00:14:55.429 16.756 - 16.873: 83.7265% ( 126) 00:14:55.429 16.873 - 16.989: 84.7988% ( 125) 00:14:55.429 16.989 - 17.105: 85.8368% ( 121) 00:14:55.429 17.105 - 17.222: 86.6861% ( 99) 00:14:55.429 17.222 - 17.338: 87.4582% ( 90) 00:14:55.429 17.338 - 17.455: 88.0501% ( 69) 00:14:55.429 17.455 - 17.571: 88.7021% ( 76) 00:14:55.429 17.571 - 17.687: 89.4827% ( 91) 00:14:55.429 17.687 - 17.804: 90.0060% ( 61) 00:14:55.429 17.804 - 17.920: 90.6237% ( 72) 00:14:55.429 17.920 - 18.036: 91.1727% ( 64) 00:14:55.429 18.036 - 18.153: 91.8075% ( 74) 00:14:55.429 18.153 - 18.269: 92.3308% ( 61) 00:14:55.429 18.269 - 18.385: 92.7597% ( 50) 00:14:55.429 18.385 - 18.502: 93.2658% ( 59) 00:14:55.429 18.502 - 18.618: 93.8320% ( 66) 00:14:55.429 18.618 - 18.735: 94.3039% ( 55) 00:14:55.429 18.735 - 18.851: 94.7928% ( 57) 00:14:55.429 18.851 - 18.967: 95.2646% ( 55) 00:14:55.429 18.967 - 19.084: 95.5735% ( 36) 00:14:55.429 19.084 - 19.200: 95.8223% ( 29) 00:14:55.429 19.200 - 19.316: 95.9938% ( 20) 00:14:55.429 19.316 - 19.433: 96.3284% ( 39) 00:14:55.429 19.433 - 19.549: 96.5343% ( 24) 00:14:55.429 19.549 - 19.665: 96.7230% ( 22) 00:14:55.429 19.665 - 19.782: 96.8431% ( 14) 00:14:55.429 19.782 - 19.898: 96.9289% ( 10) 00:14:55.429 19.898 - 20.015: 97.0232% ( 11) 00:14:55.429 20.015 - 20.131: 97.1262% ( 12) 00:14:55.429 20.131 - 20.247: 97.1605% ( 4) 00:14:55.429 20.247 - 20.364: 97.2291% ( 8) 00:14:55.429 20.364 - 20.480: 97.2634% ( 4) 00:14:55.429 20.480 - 20.596: 97.3063% ( 5) 00:14:55.429 20.596 - 20.713: 97.3578% ( 6) 00:14:55.429 20.713 - 20.829: 97.4007% ( 5) 00:14:55.429 20.829 - 20.945: 97.4264% ( 3) 00:14:55.429 20.945 - 21.062: 97.4522% ( 3) 00:14:55.429 21.062 - 21.178: 97.5122% ( 7) 00:14:55.429 21.178 - 21.295: 97.5551% ( 5) 00:14:55.429 21.295 - 21.411: 97.6066% ( 6) 00:14:55.429 21.411 - 21.527: 97.6323% ( 3) 00:14:55.429 21.527 - 21.644: 97.6581% ( 3) 00:14:55.429 21.644 - 21.760: 97.6752% ( 2) 00:14:55.429 21.876 - 21.993: 97.7095% ( 4) 00:14:55.429 21.993 - 22.109: 97.7438% ( 4) 00:14:55.429 22.109 - 22.225: 97.7524% ( 1) 00:14:55.429 22.225 - 22.342: 97.7610% ( 1) 00:14:55.429 22.458 - 22.575: 97.7696% ( 1) 00:14:55.429 22.691 - 22.807: 97.7782% ( 1) 00:14:55.429 22.807 - 22.924: 97.7953% ( 2) 
00:14:55.429 22.924 - 23.040: 97.8039% ( 1) 00:14:55.429 23.040 - 23.156: 97.8125% ( 1) 00:14:55.429 23.156 - 23.273: 97.8296% ( 2) 00:14:55.429 23.273 - 23.389: 97.8468% ( 2) 00:14:55.429 23.389 - 23.505: 97.8639% ( 2) 00:14:55.429 23.738 - 23.855: 97.8725% ( 1) 00:14:55.429 23.971 - 24.087: 97.9240% ( 6) 00:14:55.429 24.087 - 24.204: 97.9326% ( 1) 00:14:55.429 24.204 - 24.320: 97.9412% ( 1) 00:14:55.429 24.320 - 24.436: 97.9669% ( 3) 00:14:55.429 24.436 - 24.553: 97.9755% ( 1) 00:14:55.429 24.553 - 24.669: 97.9840% ( 1) 00:14:55.429 24.669 - 24.785: 97.9926% ( 1) 00:14:55.429 24.785 - 24.902: 98.0355% ( 5) 00:14:55.429 25.018 - 25.135: 98.0613% ( 3) 00:14:55.429 25.135 - 25.251: 98.0956% ( 4) 00:14:55.429 25.251 - 25.367: 98.1127% ( 2) 00:14:55.429 25.367 - 25.484: 98.1213% ( 1) 00:14:55.429 25.484 - 25.600: 98.1299% ( 1) 00:14:55.429 25.600 - 25.716: 98.1385% ( 1) 00:14:55.429 25.833 - 25.949: 98.1556% ( 2) 00:14:55.429 26.065 - 26.182: 98.1728% ( 2) 00:14:55.429 26.182 - 26.298: 98.2071% ( 4) 00:14:55.429 26.298 - 26.415: 98.2242% ( 2) 00:14:55.429 26.415 - 26.531: 98.2500% ( 3) 00:14:55.429 26.531 - 26.647: 98.2757% ( 3) 00:14:55.429 26.764 - 26.880: 98.2843% ( 1) 00:14:55.429 27.113 - 27.229: 98.3014% ( 2) 00:14:55.429 27.229 - 27.345: 98.3100% ( 1) 00:14:55.429 27.345 - 27.462: 98.3186% ( 1) 00:14:55.429 27.462 - 27.578: 98.3358% ( 2) 00:14:55.429 27.927 - 28.044: 98.3615% ( 3) 00:14:55.429 28.160 - 28.276: 98.3701% ( 1) 00:14:55.429 28.276 - 28.393: 98.3787% ( 1) 00:14:55.429 28.393 - 28.509: 98.3958% ( 2) 00:14:55.429 28.509 - 28.625: 98.4130% ( 2) 00:14:55.429 28.742 - 28.858: 98.4473% ( 4) 00:14:55.429 28.858 - 28.975: 98.4559% ( 1) 00:14:55.429 28.975 - 29.091: 98.4644% ( 1) 00:14:55.429 29.091 - 29.207: 98.4902% ( 3) 00:14:55.429 29.207 - 29.324: 98.5245% ( 4) 00:14:55.429 29.324 - 29.440: 98.5331% ( 1) 00:14:55.429 29.440 - 29.556: 98.5502% ( 2) 00:14:55.429 29.556 - 29.673: 98.5760% ( 3) 00:14:55.429 29.673 - 29.789: 98.6189% ( 5) 00:14:55.429 29.789 - 30.022: 98.6875% ( 8) 00:14:55.430 30.022 - 30.255: 98.7561% ( 8) 00:14:55.430 30.255 - 30.487: 98.8076% ( 6) 00:14:55.430 30.487 - 30.720: 98.8934% ( 10) 00:14:55.430 30.720 - 30.953: 98.9877% ( 11) 00:14:55.430 30.953 - 31.185: 99.0993% ( 13) 00:14:55.430 31.185 - 31.418: 99.1593% ( 7) 00:14:55.430 31.418 - 31.651: 99.2279% ( 8) 00:14:55.430 31.651 - 31.884: 99.3309% ( 12) 00:14:55.430 31.884 - 32.116: 99.3652% ( 4) 00:14:55.430 32.116 - 32.349: 99.4167% ( 6) 00:14:55.430 32.349 - 32.582: 99.4853% ( 8) 00:14:55.430 32.582 - 32.815: 99.5453% ( 7) 00:14:55.430 32.815 - 33.047: 99.5968% ( 6) 00:14:55.430 33.047 - 33.280: 99.6140% ( 2) 00:14:55.430 33.513 - 33.745: 99.6397% ( 3) 00:14:55.430 33.745 - 33.978: 99.6826% ( 5) 00:14:55.430 33.978 - 34.211: 99.7083% ( 3) 00:14:55.430 34.211 - 34.444: 99.7169% ( 1) 00:14:55.430 34.444 - 34.676: 99.7598% ( 5) 00:14:55.430 34.676 - 34.909: 99.7684% ( 1) 00:14:55.430 34.909 - 35.142: 99.8027% ( 4) 00:14:55.430 35.375 - 35.607: 99.8199% ( 2) 00:14:55.430 35.607 - 35.840: 99.8284% ( 1) 00:14:55.430 35.840 - 36.073: 99.8456% ( 2) 00:14:55.430 36.073 - 36.305: 99.8627% ( 2) 00:14:55.430 36.771 - 37.004: 99.8799% ( 2) 00:14:55.430 37.236 - 37.469: 99.8971% ( 2) 00:14:55.430 37.469 - 37.702: 99.9056% ( 1) 00:14:55.430 37.935 - 38.167: 99.9142% ( 1) 00:14:55.430 38.167 - 38.400: 99.9228% ( 1) 00:14:55.430 40.495 - 40.727: 99.9314% ( 1) 00:14:55.430 41.425 - 41.658: 99.9400% ( 1) 00:14:55.430 41.891 - 42.124: 99.9485% ( 1) 00:14:55.430 42.124 - 42.356: 99.9571% ( 1) 00:14:55.430 65.164 - 65.629: 
99.9657% ( 1) 00:14:55.430 72.145 - 72.611: 99.9743% ( 1) 00:14:55.430 79.127 - 79.593: 99.9828% ( 1) 00:14:55.430 80.058 - 80.524: 99.9914% ( 1) 00:14:55.430 100.073 - 100.538: 100.0000% ( 1) 00:14:55.430 00:14:55.430 ************************************ 00:14:55.430 END TEST nvme_overhead 00:14:55.430 ************************************ 00:14:55.430 00:14:55.430 real 0m1.368s 00:14:55.430 user 0m1.122s 00:14:55.430 sys 0m0.188s 00:14:55.430 13:34:47 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.430 13:34:47 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:14:55.430 13:34:47 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:55.430 13:34:47 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:55.430 13:34:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.430 13:34:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.430 ************************************ 00:14:55.430 START TEST nvme_arbitration 00:14:55.430 ************************************ 00:14:55.430 13:34:47 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:58.718 Initializing NVMe Controllers 00:14:58.718 Attached to 0000:00:10.0 00:14:58.718 Attached to 0000:00:11.0 00:14:58.718 Attached to 0000:00:13.0 00:14:58.718 Attached to 0000:00:12.0 00:14:58.718 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:14:58.718 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:14:58.718 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:14:58.718 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:14:58.719 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:14:58.719 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:14:58.719 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:58.719 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:14:58.719 Initialization complete. Launching workers. 
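The arbitration example that just launched attaches all four QEMU controllers and drives them from cores 0-3, raising one qpair per association to urgent priority; the per-core IO/s lines that follow show how the weighted round-robin arbiter shares the devices. Stripped of the harness wrapper, the run reduces to the sketch below (workspace paths as in this job; only flags whose meaning is evident from the echoed configuration are annotated, the rest are left exactly as logged):

    # -q 64          : queue depth per worker
    # -w randrw -M 50: random I/O at a 50/50 read/write mix
    # -t 3           : run for 3 seconds
    # -c 0xf         : use cores 0-3
    # -i 0           : SPDK shared-memory instance 0, as elsewhere in this job
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0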
00:14:58.719 Starting thread on core 1 with urgent priority queue 00:14:58.719 Starting thread on core 2 with urgent priority queue 00:14:58.719 Starting thread on core 3 with urgent priority queue 00:14:58.719 Starting thread on core 0 with urgent priority queue 00:14:58.719 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:14:58.719 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:14:58.719 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:14:58.719 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:14:58.719 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:14:58.719 QEMU NVMe Ctrl (12342 ) core 3: 618.67 IO/s 161.64 secs/100000 ios 00:14:58.719 ======================================================== 00:14:58.719 00:14:58.719 ************************************ 00:14:58.719 END TEST nvme_arbitration 00:14:58.719 ************************************ 00:14:58.719 00:14:58.719 real 0m3.466s 00:14:58.719 user 0m9.318s 00:14:58.719 sys 0m0.203s 00:14:58.719 13:34:50 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.719 13:34:50 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:14:58.981 13:34:50 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:58.981 13:34:50 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:58.981 13:34:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.981 13:34:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.981 ************************************ 00:14:58.981 START TEST nvme_single_aen 00:14:58.981 ************************************ 00:14:58.981 13:34:50 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:59.240 Asynchronous Event Request test 00:14:59.240 Attached to 0000:00:10.0 00:14:59.240 Attached to 0000:00:11.0 00:14:59.240 Attached to 0000:00:13.0 00:14:59.240 Attached to 0000:00:12.0 00:14:59.240 Reset controller to setup AER completions for this process 00:14:59.240 Registering asynchronous event callbacks... 
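nvme_single_aen drives test/nvme/aer/aer, which registers an AER callback on every attached controller, rewrites the temperature-threshold feature below the drives' current composite temperature, and then waits for each device to post the resulting Asynchronous Event (the "aer_cb for log page 2" lines that follow). The invocation, with flag readings inferred from this output rather than from the tool's help text:

    # -T  : temperature-threshold AER variant (inferred from the output below)
    # -i 0: SPDK shared-memory id, matching the rest of this job
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0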
00:14:59.240 Getting orig temperature thresholds of all controllers 00:14:59.240 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:59.240 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:59.240 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:59.240 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:59.240 Setting all controllers temperature threshold low to trigger AER 00:14:59.240 Waiting for all controllers temperature threshold to be set lower 00:14:59.240 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:59.240 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:59.240 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:59.240 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:59.240 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:59.240 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:59.240 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:59.240 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:59.240 Waiting for all controllers to trigger AER and reset threshold 00:14:59.240 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:59.240 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:59.240 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:59.240 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:59.240 Cleaning up... 00:14:59.240 00:14:59.240 real 0m0.310s 00:14:59.241 user 0m0.130s 00:14:59.241 sys 0m0.136s 00:14:59.241 13:34:51 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.241 ************************************ 00:14:59.241 END TEST nvme_single_aen 00:14:59.241 ************************************ 00:14:59.241 13:34:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:14:59.241 13:34:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:14:59.241 13:34:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.241 13:34:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.241 13:34:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.241 ************************************ 00:14:59.241 START TEST nvme_doorbell_aers 00:14:59.241 ************************************ 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
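The trace above shows nvme_doorbell_aers building its device list by piping scripts/gen_nvme.sh through jq to extract each controller's traddr; the per-device loop executed next is equivalent to the following sketch, with the 10-second bound taken from the logged command line:

    # Enumerate local NVMe PCIe addresses, then run the doorbell/AER fault
    # test against each device, bounded to 10 seconds per run.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done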
00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:59.241 13:34:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:59.499 [2024-11-20 13:34:51.520010] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:09.462 Executing: test_write_invalid_db 00:15:09.463 Waiting for AER completion... 00:15:09.463 Failure: test_write_invalid_db 00:15:09.463 00:15:09.463 Executing: test_invalid_db_write_overflow_sq 00:15:09.463 Waiting for AER completion... 00:15:09.463 Failure: test_invalid_db_write_overflow_sq 00:15:09.463 00:15:09.463 Executing: test_invalid_db_write_overflow_cq 00:15:09.463 Waiting for AER completion... 00:15:09.463 Failure: test_invalid_db_write_overflow_cq 00:15:09.463 00:15:09.463 13:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:09.463 13:35:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:09.720 [2024-11-20 13:35:01.607051] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:19.766 Executing: test_write_invalid_db 00:15:19.766 Waiting for AER completion... 00:15:19.766 Failure: test_write_invalid_db 00:15:19.766 00:15:19.766 Executing: test_invalid_db_write_overflow_sq 00:15:19.766 Waiting for AER completion... 00:15:19.766 Failure: test_invalid_db_write_overflow_sq 00:15:19.766 00:15:19.766 Executing: test_invalid_db_write_overflow_cq 00:15:19.766 Waiting for AER completion... 00:15:19.766 Failure: test_invalid_db_write_overflow_cq 00:15:19.766 00:15:19.766 13:35:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:19.766 13:35:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:19.766 [2024-11-20 13:35:11.604325] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:29.749 Executing: test_write_invalid_db 00:15:29.749 Waiting for AER completion... 00:15:29.749 Failure: test_write_invalid_db 00:15:29.749 00:15:29.749 Executing: test_invalid_db_write_overflow_sq 00:15:29.749 Waiting for AER completion... 00:15:29.749 Failure: test_invalid_db_write_overflow_sq 00:15:29.749 00:15:29.749 Executing: test_invalid_db_write_overflow_cq 00:15:29.749 Waiting for AER completion... 
00:15:29.749 Failure: test_invalid_db_write_overflow_cq 00:15:29.749 00:15:29.749 13:35:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:29.749 13:35:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:29.749 [2024-11-20 13:35:21.671108] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 Executing: test_write_invalid_db 00:15:39.719 Waiting for AER completion... 00:15:39.719 Failure: test_write_invalid_db 00:15:39.719 00:15:39.719 Executing: test_invalid_db_write_overflow_sq 00:15:39.719 Waiting for AER completion... 00:15:39.719 Failure: test_invalid_db_write_overflow_sq 00:15:39.719 00:15:39.719 Executing: test_invalid_db_write_overflow_cq 00:15:39.719 Waiting for AER completion... 00:15:39.719 Failure: test_invalid_db_write_overflow_cq 00:15:39.719 00:15:39.719 00:15:39.719 real 0m40.269s 00:15:39.719 user 0m34.006s 00:15:39.719 sys 0m5.796s 00:15:39.719 13:35:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.719 13:35:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:15:39.719 ************************************ 00:15:39.719 END TEST nvme_doorbell_aers 00:15:39.719 ************************************ 00:15:39.719 13:35:31 nvme -- nvme/nvme.sh@97 -- # uname 00:15:39.719 13:35:31 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:15:39.719 13:35:31 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:39.719 13:35:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:15:39.719 13:35:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.719 13:35:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.719 ************************************ 00:15:39.719 START TEST nvme_multi_aen 00:15:39.719 ************************************ 00:15:39.719 13:35:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:39.719 [2024-11-20 13:35:31.716009] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.716352] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.716381] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.718104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.718159] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.718178] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.719574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. 
Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.719769] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.719795] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.721255] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.721304] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 [2024-11-20 13:35:31.721322] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64888) is not found. Dropping the request. 00:15:39.719 Child process pid: 65405 00:15:40.285 [Child] Asynchronous Event Request test 00:15:40.285 [Child] Attached to 0000:00:10.0 00:15:40.285 [Child] Attached to 0000:00:11.0 00:15:40.285 [Child] Attached to 0000:00:13.0 00:15:40.286 [Child] Attached to 0000:00:12.0 00:15:40.286 [Child] Registering asynchronous event callbacks... 00:15:40.286 [Child] Getting orig temperature thresholds of all controllers 00:15:40.286 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 [Child] Waiting for all controllers to trigger AER and reset threshold 00:15:40.286 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 [Child] Cleaning up... 00:15:40.286 Asynchronous Event Request test 00:15:40.286 Attached to 0000:00:10.0 00:15:40.286 Attached to 0000:00:11.0 00:15:40.286 Attached to 0000:00:13.0 00:15:40.286 Attached to 0000:00:12.0 00:15:40.286 Reset controller to setup AER completions for this process 00:15:40.286 Registering asynchronous event callbacks... 
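nvme_multi_aen reruns the same aer tool with -m, which forks a child (pid 65405 above): the child attaches to the controllers, completes its own AER pass (the [Child] lines), and only then does the parent repeat the sequence, as the output continuing below shows. This layers SPDK's multi-process controller sharing on top of the temperature-threshold test:

    # -m : multi-process variant with a forked child pass
    #      (reading inferred from the [Child] output above)
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0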
00:15:40.286 Getting orig temperature thresholds of all controllers 00:15:40.286 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:40.286 Setting all controllers temperature threshold low to trigger AER 00:15:40.286 Waiting for all controllers temperature threshold to be set lower 00:15:40.286 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:15:40.286 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:15:40.286 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:15:40.286 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:40.286 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:15:40.286 Waiting for all controllers to trigger AER and reset threshold 00:15:40.286 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:40.286 Cleaning up... 00:15:40.286 00:15:40.286 real 0m0.669s 00:15:40.286 user 0m0.246s 00:15:40.286 sys 0m0.314s 00:15:40.286 13:35:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.286 13:35:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 ************************************ 00:15:40.286 END TEST nvme_multi_aen 00:15:40.286 ************************************ 00:15:40.286 13:35:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:40.286 13:35:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:40.286 13:35:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.286 13:35:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 ************************************ 00:15:40.286 START TEST nvme_startup 00:15:40.286 ************************************ 00:15:40.286 13:35:32 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:40.544 Initializing NVMe Controllers 00:15:40.544 Attached to 0000:00:10.0 00:15:40.544 Attached to 0000:00:11.0 00:15:40.544 Attached to 0000:00:13.0 00:15:40.544 Attached to 0000:00:12.0 00:15:40.544 Initialization complete. 00:15:40.544 Time used:246743.922 (us). 
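nvme_startup only measures initialization cost: "Time used:246743.922 (us)" above is how long attaching and initializing all four controllers took, comfortably inside the budget passed on the command line (presumably an upper bound in microseconds, matching the units of the reported figure):

    # Fail if controller startup exceeds ~1 second (1000000 us, assumed).
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000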
00:15:40.544 ************************************ 00:15:40.544 END TEST nvme_startup 00:15:40.544 ************************************ 00:15:40.544 00:15:40.544 real 0m0.348s 00:15:40.544 user 0m0.132s 00:15:40.544 sys 0m0.170s 00:15:40.544 13:35:32 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.544 13:35:32 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:15:40.544 13:35:32 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:15:40.544 13:35:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:40.544 13:35:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.544 13:35:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.544 ************************************ 00:15:40.544 START TEST nvme_multi_secondary 00:15:40.544 ************************************ 00:15:40.544 13:35:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:15:40.544 13:35:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65461 00:15:40.544 13:35:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:15:40.544 13:35:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65462 00:15:40.544 13:35:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:40.544 13:35:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:15:43.887 Initializing NVMe Controllers 00:15:43.887 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:43.887 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:43.887 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:43.887 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:43.887 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:43.887 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:43.887 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:43.887 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:43.887 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:43.887 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:43.887 Initialization complete. Launching workers. 
00:15:43.887 ======================================================== 00:15:43.887 Latency(us) 00:15:43.887 Device Information : IOPS MiB/s Average min max 00:15:43.887 PCIE (0000:00:10.0) NSID 1 from core 2: 2112.72 8.25 7571.10 974.79 26922.37 00:15:43.887 PCIE (0000:00:11.0) NSID 1 from core 2: 2112.72 8.25 7572.49 971.05 27120.09 00:15:43.887 PCIE (0000:00:13.0) NSID 1 from core 2: 2112.72 8.25 7572.41 988.98 26876.85 00:15:43.887 PCIE (0000:00:12.0) NSID 1 from core 2: 2112.72 8.25 7570.91 1016.65 26659.57 00:15:43.887 PCIE (0000:00:12.0) NSID 2 from core 2: 2112.72 8.25 7563.33 994.32 26284.74 00:15:43.887 PCIE (0000:00:12.0) NSID 3 from core 2: 2118.04 8.27 7545.33 1003.11 26357.62 00:15:43.887 ======================================================== 00:15:43.887 Total : 12681.63 49.54 7565.92 971.05 27120.09 00:15:43.887 00:15:44.145 13:35:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65461 00:15:44.145 Initializing NVMe Controllers 00:15:44.145 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:44.145 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:44.145 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:44.145 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:44.145 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:44.145 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:44.145 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:44.145 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:44.145 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:44.145 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:44.145 Initialization complete. Launching workers. 00:15:44.145 ======================================================== 00:15:44.145 Latency(us) 00:15:44.145 Device Information : IOPS MiB/s Average min max 00:15:44.145 PCIE (0000:00:10.0) NSID 1 from core 1: 4764.72 18.61 3355.76 1315.11 14092.72 00:15:44.145 PCIE (0000:00:11.0) NSID 1 from core 1: 4764.72 18.61 3357.26 1377.72 13943.36 00:15:44.145 PCIE (0000:00:13.0) NSID 1 from core 1: 4764.72 18.61 3357.09 1368.76 14092.65 00:15:44.145 PCIE (0000:00:12.0) NSID 1 from core 1: 4764.72 18.61 3356.90 1324.42 13989.18 00:15:44.145 PCIE (0000:00:12.0) NSID 2 from core 1: 4764.72 18.61 3356.73 1397.93 12557.81 00:15:44.145 PCIE (0000:00:12.0) NSID 3 from core 1: 4764.72 18.61 3356.66 1332.47 13249.67 00:15:44.145 ======================================================== 00:15:44.145 Total : 28588.33 111.67 3356.73 1315.11 14092.72 00:15:44.145 00:15:46.044 Initializing NVMe Controllers 00:15:46.044 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:46.044 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:46.044 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:46.044 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:46.044 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:46.044 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:46.044 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:46.044 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:46.044 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:46.044 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:46.044 Initialization complete. Launching workers. 
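nvme_multi_secondary launches three spdk_nvme_perf instances against the same shared-memory id, so one primary and two secondary SPDK processes drive the same controllers concurrently on disjoint core masks; the harness backgrounds them and waits on pids 65461/65462, as the wait lines in this output show. Condensed from the command lines echoed above (in this pass the 5-second run lands on core 0; the second pass that follows moves it to core 2):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Three concurrent 4 KiB readers sharing SPDK process state via -i 0.
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, 5 s
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1, 3 s
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # core 2, 3 s
    wait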
00:15:46.044 ======================================================== 00:15:46.044 Latency(us) 00:15:46.044 Device Information : IOPS MiB/s Average min max 00:15:46.044 PCIE (0000:00:10.0) NSID 1 from core 0: 6116.94 23.89 2613.53 963.96 13001.28 00:15:46.044 PCIE (0000:00:11.0) NSID 1 from core 0: 6116.94 23.89 2615.02 971.84 13018.52 00:15:46.044 PCIE (0000:00:13.0) NSID 1 from core 0: 6116.94 23.89 2614.96 978.00 13115.51 00:15:46.044 PCIE (0000:00:12.0) NSID 1 from core 0: 6116.94 23.89 2614.91 997.18 11771.06 00:15:46.044 PCIE (0000:00:12.0) NSID 2 from core 0: 6116.94 23.89 2614.85 919.20 11843.43 00:15:46.044 PCIE (0000:00:12.0) NSID 3 from core 0: 6116.94 23.89 2614.80 811.30 11722.16 00:15:46.044 ======================================================== 00:15:46.044 Total : 36701.62 143.37 2614.68 811.30 13115.51 00:15:46.044 00:15:46.044 13:35:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65462 00:15:46.044 13:35:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65532 00:15:46.044 13:35:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:15:46.044 13:35:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65533 00:15:46.044 13:35:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:46.044 13:35:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:15:49.324 Initializing NVMe Controllers 00:15:49.324 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:49.324 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:49.324 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:49.324 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:49.324 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:49.324 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:49.324 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:49.325 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:49.325 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:49.325 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:49.325 Initialization complete. Launching workers. 
00:15:49.325 ======================================================== 00:15:49.325 Latency(us) 00:15:49.325 Device Information : IOPS MiB/s Average min max 00:15:49.325 PCIE (0000:00:10.0) NSID 1 from core 0: 4485.90 17.52 3564.49 1307.30 11166.29 00:15:49.325 PCIE (0000:00:11.0) NSID 1 from core 0: 4485.90 17.52 3566.28 1308.83 11026.28 00:15:49.325 PCIE (0000:00:13.0) NSID 1 from core 0: 4485.90 17.52 3566.43 1276.38 11554.19 00:15:49.325 PCIE (0000:00:12.0) NSID 1 from core 0: 4485.90 17.52 3566.37 1312.03 13871.14 00:15:49.325 PCIE (0000:00:12.0) NSID 2 from core 0: 4491.22 17.54 3562.07 1374.68 10359.06 00:15:49.325 PCIE (0000:00:12.0) NSID 3 from core 0: 4491.22 17.54 3562.14 1295.36 11334.90 00:15:49.325 ======================================================== 00:15:49.325 Total : 26926.03 105.18 3564.63 1276.38 13871.14 00:15:49.325 00:15:49.582 Initializing NVMe Controllers 00:15:49.582 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:49.582 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:49.582 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:49.582 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:49.582 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:49.582 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:49.582 Initialization complete. Launching workers. 00:15:49.582 ======================================================== 00:15:49.582 Latency(us) 00:15:49.582 Device Information : IOPS MiB/s Average min max 00:15:49.582 PCIE (0000:00:10.0) NSID 1 from core 1: 4561.66 17.82 3505.45 1171.45 11567.08 00:15:49.582 PCIE (0000:00:11.0) NSID 1 from core 1: 4561.66 17.82 3508.04 1297.38 11317.24 00:15:49.582 PCIE (0000:00:13.0) NSID 1 from core 1: 4561.66 17.82 3508.03 1312.05 11641.57 00:15:49.582 PCIE (0000:00:12.0) NSID 1 from core 1: 4561.66 17.82 3507.97 1325.53 13670.61 00:15:49.582 PCIE (0000:00:12.0) NSID 2 from core 1: 4561.66 17.82 3507.93 1339.66 13587.03 00:15:49.582 PCIE (0000:00:12.0) NSID 3 from core 1: 4566.99 17.84 3503.78 1241.09 11396.20 00:15:49.582 ======================================================== 00:15:49.582 Total : 27375.30 106.93 3506.87 1171.45 13670.61 00:15:49.582 00:15:52.109 Initializing NVMe Controllers 00:15:52.109 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:52.109 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:52.109 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:52.109 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:52.109 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:52.109 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:52.109 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:52.109 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:52.109 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:52.109 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:52.109 Initialization complete. Launching workers. 
00:15:52.109 ======================================================== 00:15:52.109 Latency(us) 00:15:52.109 Device Information : IOPS MiB/s Average min max 00:15:52.109 PCIE (0000:00:10.0) NSID 1 from core 2: 3381.70 13.21 4728.60 921.36 24186.62 00:15:52.109 PCIE (0000:00:11.0) NSID 1 from core 2: 3381.70 13.21 4730.78 951.41 24018.11 00:15:52.109 PCIE (0000:00:13.0) NSID 1 from core 2: 3381.70 13.21 4730.42 968.14 23705.81 00:15:52.109 PCIE (0000:00:12.0) NSID 1 from core 2: 3381.70 13.21 4729.97 978.49 27135.86 00:15:52.109 PCIE (0000:00:12.0) NSID 2 from core 2: 3381.70 13.21 4730.24 973.25 27400.19 00:15:52.109 PCIE (0000:00:12.0) NSID 3 from core 2: 3381.70 13.21 4729.56 955.10 27026.53 00:15:52.109 ======================================================== 00:15:52.109 Total : 20290.19 79.26 4729.93 921.36 27400.19 00:15:52.109 00:15:52.109 ************************************ 00:15:52.109 END TEST nvme_multi_secondary 00:15:52.109 ************************************ 00:15:52.109 13:35:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65532 00:15:52.109 13:35:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65533 00:15:52.109 00:15:52.109 real 0m11.263s 00:15:52.109 user 0m18.672s 00:15:52.109 sys 0m1.110s 00:15:52.109 13:35:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.109 13:35:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:15:52.109 13:35:43 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:15:52.109 13:35:43 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:15:52.109 13:35:43 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64468 ]] 00:15:52.109 13:35:43 nvme -- common/autotest_common.sh@1094 -- # kill 64468 00:15:52.109 13:35:43 nvme -- common/autotest_common.sh@1095 -- # wait 64468 00:15:52.109 [2024-11-20 13:35:43.851840] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.851928] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.851969] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.851992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.854293] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.854377] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.854403] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.854435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.856713] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 
00:15:52.109 [2024-11-20 13:35:43.856771] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.856794] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.856815] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.859773] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.860076] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.860114] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:43.860142] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65404) is not found. Dropping the request. 00:15:52.109 [2024-11-20 13:35:44.021316] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:15:52.109 13:35:44 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:15:52.109 13:35:44 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:15:52.109 13:35:44 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:52.109 13:35:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:52.109 13:35:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.109 13:35:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.109 ************************************ 00:15:52.109 START TEST bdev_nvme_reset_stuck_adm_cmd 00:15:52.109 ************************************ 00:15:52.109 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:52.109 * Looking for test storage... 
00:15:52.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:52.109 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:52.109 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:15:52.109 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:52.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.369 --rc genhtml_branch_coverage=1 00:15:52.369 --rc genhtml_function_coverage=1 00:15:52.369 --rc genhtml_legend=1 00:15:52.369 --rc geninfo_all_blocks=1 00:15:52.369 --rc geninfo_unexecuted_blocks=1 00:15:52.369 00:15:52.369 ' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:52.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.369 --rc genhtml_branch_coverage=1 00:15:52.369 --rc genhtml_function_coverage=1 00:15:52.369 --rc genhtml_legend=1 00:15:52.369 --rc geninfo_all_blocks=1 00:15:52.369 --rc geninfo_unexecuted_blocks=1 00:15:52.369 00:15:52.369 ' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:52.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.369 --rc genhtml_branch_coverage=1 00:15:52.369 --rc genhtml_function_coverage=1 00:15:52.369 --rc genhtml_legend=1 00:15:52.369 --rc geninfo_all_blocks=1 00:15:52.369 --rc geninfo_unexecuted_blocks=1 00:15:52.369 00:15:52.369 ' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:52.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.369 --rc genhtml_branch_coverage=1 00:15:52.369 --rc genhtml_function_coverage=1 00:15:52.369 --rc genhtml_legend=1 00:15:52.369 --rc geninfo_all_blocks=1 00:15:52.369 --rc geninfo_unexecuted_blocks=1 00:15:52.369 00:15:52.369 ' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:15:52.369 
13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65700 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65700 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65700 ']' 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
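[get_first_nvme_bdf above resolves the device list by asking scripts/gen_nvme.sh for a generated attach-controller config, extracting every traddr with jq, and taking the first entry; on this VM that yields the four QEMU controllers. The same pattern as a minimal standalone snippet, with the paths used in this run:]

# Enumerate NVMe PCI addresses the way get_nvme_bdfs does in the trace above:
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'No NVMe devices found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"  # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 here
bdf=${bdfs[0]}              # the test targets the first controller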
00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.369 13:35:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:52.370 [2024-11-20 13:35:44.396958] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:15:52.370 [2024-11-20 13:35:44.397316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65700 ] 00:15:52.628 [2024-11-20 13:35:44.606919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.885 [2024-11-20 13:35:44.740207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.885 [2024-11-20 13:35:44.740301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.885 [2024-11-20 13:35:44.740386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.885 [2024-11-20 13:35:44.740406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 nvme0n1 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_gYmPu.txt 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 true 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732109745 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65724 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:15:53.828 13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:53.828 
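[Everything above goes over /var/tmp/spdk.sock: attach the first controller as nvme0, arm a one-shot injection that holds the next admin Get Features (opc 10) for up to 15 s and then completes it with SCT=0/SC=1, and launch the doomed Get Features in the background so the upcoming reset has a stuck command to race. A sketch of that sequence using rpc.py, where cmd_b64 and tmp_file stand in for the literal base64 payload and mktemp path in the trace:]

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# One-shot: hold the next admin opc-10 command for up to 15s, then fail it
# with SCT 0 / SC 1 instead of submitting it to the drive.
"$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Fire Get Features in the background; its JSON completion (.cpl) lands in tmp_file.
"$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" > "$tmp_file" &
get_feat_pid=$!
# After bdev_nvme_reset_controller nvme0, the status bytes come back out the
# same way base64_decode_bits does it below:
base64 -d <(jq -r .cpl "$tmp_file") | hexdump -ve '/1 "0x%02x\n"'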
13:35:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:55.742 [2024-11-20 13:35:47.640271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:15:55.742 [2024-11-20 13:35:47.640621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:55.742 [2024-11-20 13:35:47.640659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:55.742 [2024-11-20 13:35:47.640679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.742 [2024-11-20 13:35:47.642578] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:15:55.742 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65724 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65724 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65724 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_gYmPu.txt 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_gYmPu.txt 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65700 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65700 ']' 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65700 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65700 00:15:55.742 killing process with pid 65700 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65700' 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65700 00:15:55.742 13:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65700 00:15:58.274 13:35:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:15:58.274 13:35:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:15:58.274 00:15:58.274 real 0m5.820s 00:15:58.274 user 0m20.593s 00:15:58.274 sys 0m0.631s 00:15:58.274 ************************************ 00:15:58.274 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:15:58.274 ************************************ 00:15:58.274 13:35:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.274 13:35:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:58.274 13:35:49 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:15:58.274 13:35:49 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:15:58.274 13:35:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.274 13:35:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.274 13:35:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.274 ************************************ 00:15:58.274 START TEST nvme_fio 00:15:58.274 ************************************ 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:58.274 13:35:49 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:58.274 13:35:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:58.274 13:35:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:58.274 13:35:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:58.532 13:35:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:58.532 13:35:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:58.532 13:35:50 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:58.532 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:58.791 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:58.791 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:58.791 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:58.791 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:58.791 13:35:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:58.791 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:58.791 fio-3.35 00:15:58.791 Starting 1 thread 00:16:02.078 00:16:02.078 test: (groupid=0, jobs=1): err= 0: pid=65876: Wed Nov 20 13:35:53 2024 00:16:02.078 read: IOPS=14.6k, BW=57.1MiB/s (59.8MB/s)(114MiB/2001msec) 00:16:02.078 slat (nsec): min=4722, max=63646, avg=6804.44, stdev=2224.55 00:16:02.078 clat (usec): min=315, max=10140, avg=4359.78, stdev=691.39 00:16:02.078 lat (usec): min=338, max=10178, avg=4366.58, stdev=692.18 00:16:02.078 clat percentiles (usec): 00:16:02.078 | 1.00th=[ 2999], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:16:02.078 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4293], 00:16:02.078 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 5669], 00:16:02.078 | 99.00th=[ 6783], 99.50th=[ 7373], 99.90th=[ 8225], 99.95th=[ 8717], 00:16:02.078 | 99.99th=[10028] 00:16:02.078 bw ( KiB/s): min=56080, max=58840, per=98.51%, avg=57552.00, stdev=1389.17, samples=3 00:16:02.078 iops : min=14020, max=14710, avg=14388.00, stdev=347.29, samples=3 00:16:02.078 write: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(114MiB/2001msec); 0 zone resets 00:16:02.078 slat (nsec): min=4868, max=46519, avg=7079.93, stdev=2141.03 00:16:02.078 clat (usec): min=360, max=9983, avg=4363.80, stdev=689.12 00:16:02.078 lat (usec): min=367, max=10008, avg=4370.88, stdev=689.92 00:16:02.078 clat percentiles (usec): 00:16:02.078 | 1.00th=[ 3032], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:16:02.078 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4293], 00:16:02.078 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 5669], 00:16:02.078 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 8225], 99.95th=[ 8717], 00:16:02.078 | 99.99th=[ 9765] 00:16:02.078 bw ( KiB/s): min=56352, max=58576, per=98.11%, avg=57453.33, stdev=1112.15, samples=3 00:16:02.078 iops : min=14088, max=14644, avg=14363.33, stdev=278.04, samples=3 00:16:02.078 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:02.078 lat (msec) : 2=0.04%, 4=31.98%, 10=67.94%, 20=0.01% 00:16:02.078 cpu : usr=98.75%, sys=0.20%, ctx=4, majf=0, 
minf=607 00:16:02.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:02.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.078 issued rwts: total=29226,29294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.078 00:16:02.078 Run status group 0 (all jobs): 00:16:02.078 READ: bw=57.1MiB/s (59.8MB/s), 57.1MiB/s-57.1MiB/s (59.8MB/s-59.8MB/s), io=114MiB (120MB), run=2001-2001msec 00:16:02.078 WRITE: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=114MiB (120MB), run=2001-2001msec 00:16:02.078 ----------------------------------------------------- 00:16:02.078 Suppressions used: 00:16:02.078 count bytes template 00:16:02.078 1 32 /usr/src/fio/parse.c 00:16:02.078 1 8 libtcmalloc_minimal.so 00:16:02.078 ----------------------------------------------------- 00:16:02.078 00:16:02.078 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:02.078 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:02.078 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:02.078 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:02.337 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:02.337 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:02.595 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:02.595 13:35:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:02.595 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:02.595 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:02.595 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:02.595 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:02.595 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:02.595 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:16:02.596 13:35:54 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:02.596 13:35:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:02.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:02.855 fio-3.35 00:16:02.855 Starting 1 thread 00:16:06.140 00:16:06.141 test: (groupid=0, jobs=1): err= 0: pid=65937: Wed Nov 20 13:35:57 2024 00:16:06.141 read: IOPS=14.1k, BW=55.2MiB/s (57.9MB/s)(110MiB/2001msec) 00:16:06.141 slat (nsec): min=4542, max=68916, avg=6871.11, stdev=2317.01 00:16:06.141 clat (usec): min=310, max=12383, avg=4507.67, stdev=764.30 00:16:06.141 lat (usec): min=319, max=12420, avg=4514.54, stdev=765.13 00:16:06.141 clat percentiles (usec): 00:16:06.141 | 1.00th=[ 2606], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 4178], 00:16:06.141 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:16:06.141 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5145], 95.00th=[ 6259], 00:16:06.141 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[10421], 00:16:06.141 | 99.99th=[12256] 00:16:06.141 bw ( KiB/s): min=55136, max=58888, per=100.00%, avg=56789.00, stdev=1915.35, samples=3 00:16:06.141 iops : min=13784, max=14722, avg=14197.00, stdev=478.92, samples=3 00:16:06.141 write: IOPS=14.1k, BW=55.2MiB/s (57.9MB/s)(111MiB/2001msec); 0 zone resets 00:16:06.141 slat (nsec): min=4710, max=86636, avg=7169.57, stdev=2355.58 00:16:06.141 clat (usec): min=457, max=12225, avg=4516.27, stdev=767.21 00:16:06.141 lat (usec): min=468, max=12238, avg=4523.44, stdev=768.03 00:16:06.141 clat percentiles (usec): 00:16:06.141 | 1.00th=[ 2638], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 4178], 00:16:06.141 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:16:06.141 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5211], 95.00th=[ 6325], 00:16:06.141 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 9110], 99.95th=[10552], 00:16:06.141 | 99.99th=[11863] 00:16:06.141 bw ( KiB/s): min=55496, max=58240, per=100.00%, avg=56778.00, stdev=1380.83, samples=3 00:16:06.141 iops : min=13874, max=14560, avg=14194.33, stdev=345.24, samples=3 00:16:06.141 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:06.141 lat (msec) : 2=0.14%, 4=13.75%, 10=86.01%, 20=0.07% 00:16:06.141 cpu : usr=98.80%, sys=0.10%, ctx=5, majf=0, minf=607 00:16:06.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:06.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.141 issued rwts: total=28278,28295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.141 00:16:06.141 Run status group 0 (all jobs): 00:16:06.141 READ: bw=55.2MiB/s (57.9MB/s), 55.2MiB/s-55.2MiB/s (57.9MB/s-57.9MB/s), io=110MiB (116MB), run=2001-2001msec 00:16:06.141 WRITE: bw=55.2MiB/s (57.9MB/s), 55.2MiB/s-55.2MiB/s (57.9MB/s-57.9MB/s), io=111MiB (116MB), run=2001-2001msec 00:16:06.141 ----------------------------------------------------- 00:16:06.141 Suppressions used: 00:16:06.141 count bytes template 00:16:06.141 1 32 /usr/src/fio/parse.c 00:16:06.141 1 8 libtcmalloc_minimal.so 00:16:06.141 ----------------------------------------------------- 00:16:06.141 
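[Each pass of the per-bdf loop above follows the same recipe: probe the namespace with spdk_nvme_identify and fall back to --bs=4096 when no 'Extended Data LBA' format is reported, locate the ASAN runtime the fio plugin links against via ldd, and preload it ahead of the plugin so the sanitizer initializes before fio dlopens the ioengine. Note the PCI address is embedded in fio's filename with ':' rewritten as '.'. Condensed from the two invocations traced so far:]

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# The sanitizer runtime must be resolved and loaded before the SPDK plugin,
# hence the two-entry LD_PRELOAD ordering below.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096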
00:16:06.141 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:06.141 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:06.141 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:06.141 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:06.400 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:06.400 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:06.658 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:06.658 13:35:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:06.658 13:35:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:06.916 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:06.916 fio-3.35 00:16:06.916 Starting 1 thread 00:16:10.277 00:16:10.277 test: (groupid=0, jobs=1): err= 0: pid=66003: Wed Nov 20 13:36:01 2024 00:16:10.277 read: IOPS=14.5k, BW=56.7MiB/s (59.5MB/s)(114MiB/2001msec) 00:16:10.277 slat (nsec): min=4553, max=44323, avg=6745.36, stdev=2243.81 00:16:10.277 clat (usec): min=358, max=8895, avg=4386.32, stdev=781.80 00:16:10.277 lat (usec): min=364, max=8901, avg=4393.07, stdev=782.71 00:16:10.277 clat percentiles (usec): 00:16:10.277 | 1.00th=[ 3032], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3752], 00:16:10.277 | 30.00th=[ 
4047], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:16:10.277 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 6063], 00:16:10.277 | 99.00th=[ 7439], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8455], 00:16:10.277 | 99.99th=[ 8717] 00:16:10.277 bw ( KiB/s): min=55464, max=59440, per=98.41%, avg=57184.00, stdev=2041.47, samples=3 00:16:10.277 iops : min=13868, max=14858, avg=14296.00, stdev=508.42, samples=3 00:16:10.277 write: IOPS=14.6k, BW=56.9MiB/s (59.6MB/s)(114MiB/2001msec); 0 zone resets 00:16:10.277 slat (nsec): min=4640, max=88771, avg=6883.67, stdev=2382.88 00:16:10.277 clat (usec): min=277, max=8801, avg=4388.10, stdev=777.65 00:16:10.277 lat (usec): min=284, max=8807, avg=4394.98, stdev=778.62 00:16:10.277 clat percentiles (usec): 00:16:10.277 | 1.00th=[ 3064], 5.00th=[ 3490], 10.00th=[ 3621], 20.00th=[ 3785], 00:16:10.277 | 30.00th=[ 4047], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:16:10.277 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5932], 00:16:10.277 | 99.00th=[ 7439], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8455], 00:16:10.277 | 99.99th=[ 8586] 00:16:10.277 bw ( KiB/s): min=55248, max=59104, per=98.07%, avg=57104.00, stdev=1932.03, samples=3 00:16:10.277 iops : min=13812, max=14776, avg=14276.00, stdev=483.01, samples=3 00:16:10.277 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:10.277 lat (msec) : 2=0.05%, 4=28.78%, 10=71.13% 00:16:10.277 cpu : usr=98.80%, sys=0.15%, ctx=4, majf=0, minf=607 00:16:10.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:10.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.277 issued rwts: total=29068,29127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.277 00:16:10.277 Run status group 0 (all jobs): 00:16:10.277 READ: bw=56.7MiB/s (59.5MB/s), 56.7MiB/s-56.7MiB/s (59.5MB/s-59.5MB/s), io=114MiB (119MB), run=2001-2001msec 00:16:10.277 WRITE: bw=56.9MiB/s (59.6MB/s), 56.9MiB/s-56.9MiB/s (59.6MB/s-59.6MB/s), io=114MiB (119MB), run=2001-2001msec 00:16:10.277 ----------------------------------------------------- 00:16:10.277 Suppressions used: 00:16:10.277 count bytes template 00:16:10.277 1 32 /usr/src/fio/parse.c 00:16:10.277 1 8 libtcmalloc_minimal.so 00:16:10.277 ----------------------------------------------------- 00:16:10.277 00:16:10.277 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:10.277 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:10.277 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:10.277 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:10.536 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:10.536 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:10.794 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:10.794 13:36:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:10.794 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:11.053 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:11.053 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:11.053 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:16:11.053 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:11.053 13:36:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:11.053 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:11.053 fio-3.35 00:16:11.053 Starting 1 thread 00:16:15.307 00:16:15.307 test: (groupid=0, jobs=1): err= 0: pid=66064: Wed Nov 20 13:36:06 2024 00:16:15.307 read: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(114MiB/2001msec) 00:16:15.307 slat (nsec): min=4566, max=46205, avg=6668.09, stdev=2237.34 00:16:15.307 clat (usec): min=383, max=9527, avg=4352.68, stdev=677.68 00:16:15.307 lat (usec): min=388, max=9557, avg=4359.35, stdev=678.58 00:16:15.307 clat percentiles (usec): 00:16:15.307 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3884], 00:16:15.307 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:16:15.307 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5407], 00:16:15.307 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[ 9110], 99.95th=[ 9372], 00:16:15.307 | 99.99th=[ 9503] 00:16:15.307 bw ( KiB/s): min=55376, max=62688, per=100.00%, avg=59418.67, stdev=3716.84, samples=3 00:16:15.307 iops : min=13844, max=15672, avg=14854.67, stdev=929.21, samples=3 00:16:15.307 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(115MiB/2001msec); 0 zone resets 00:16:15.307 slat (nsec): min=4696, max=53642, avg=6775.17, stdev=2224.88 00:16:15.307 clat (usec): min=252, max=9534, avg=4355.97, stdev=663.50 00:16:15.307 lat (usec): min=258, max=9541, avg=4362.74, stdev=664.36 00:16:15.307 clat percentiles (usec): 00:16:15.307 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3916], 00:16:15.307 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:16:15.307 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5407], 
00:16:15.307 | 99.00th=[ 7177], 99.50th=[ 7635], 99.90th=[ 8848], 99.95th=[ 9241], 00:16:15.307 | 99.99th=[ 9503] 00:16:15.307 bw ( KiB/s): min=55648, max=61848, per=100.00%, avg=59229.33, stdev=3210.15, samples=3 00:16:15.307 iops : min=13912, max=15462, avg=14807.33, stdev=802.54, samples=3 00:16:15.307 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:16:15.307 lat (msec) : 2=0.06%, 4=22.67%, 10=77.23% 00:16:15.307 cpu : usr=98.75%, sys=0.15%, ctx=4, majf=0, minf=605 00:16:15.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:15.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.307 issued rwts: total=29268,29337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.307 00:16:15.307 Run status group 0 (all jobs): 00:16:15.307 READ: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=114MiB (120MB), run=2001-2001msec 00:16:15.307 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=115MiB (120MB), run=2001-2001msec 00:16:15.307 ----------------------------------------------------- 00:16:15.307 Suppressions used: 00:16:15.307 count bytes template 00:16:15.307 1 32 /usr/src/fio/parse.c 00:16:15.307 1 8 libtcmalloc_minimal.so 00:16:15.307 ----------------------------------------------------- 00:16:15.307 00:16:15.307 13:36:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:15.307 13:36:07 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:16:15.307 00:16:15.307 real 0m17.159s 00:16:15.307 user 0m13.559s 00:16:15.307 sys 0m2.511s 00:16:15.307 13:36:07 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.307 13:36:07 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:16:15.307 ************************************ 00:16:15.307 END TEST nvme_fio 00:16:15.307 ************************************ 00:16:15.307 00:16:15.307 real 1m31.156s 00:16:15.307 user 3m46.209s 00:16:15.307 sys 0m15.250s 00:16:15.307 13:36:07 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.307 13:36:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.307 ************************************ 00:16:15.307 END TEST nvme 00:16:15.307 ************************************ 00:16:15.307 13:36:07 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:16:15.307 13:36:07 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:15.307 13:36:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.307 13:36:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.307 13:36:07 -- common/autotest_common.sh@10 -- # set +x 00:16:15.307 ************************************ 00:16:15.307 START TEST nvme_scc 00:16:15.307 ************************************ 00:16:15.307 13:36:07 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:15.307 * Looking for test storage... 
00:16:15.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:15.307 13:36:07 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:15.307 13:36:07 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:15.307 13:36:07 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:15.307 13:36:07 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@345 -- # : 1 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.307 13:36:07 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@368 -- # return 0 00:16:15.566 13:36:07 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.566 13:36:07 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:15.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.566 --rc genhtml_branch_coverage=1 00:16:15.566 --rc genhtml_function_coverage=1 00:16:15.566 --rc genhtml_legend=1 00:16:15.566 --rc geninfo_all_blocks=1 00:16:15.566 --rc geninfo_unexecuted_blocks=1 00:16:15.566 00:16:15.566 ' 00:16:15.566 13:36:07 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:15.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.566 --rc genhtml_branch_coverage=1 00:16:15.566 --rc genhtml_function_coverage=1 00:16:15.566 --rc genhtml_legend=1 00:16:15.566 --rc geninfo_all_blocks=1 00:16:15.566 --rc geninfo_unexecuted_blocks=1 00:16:15.566 00:16:15.566 ' 00:16:15.566 13:36:07 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:16:15.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.566 --rc genhtml_branch_coverage=1 00:16:15.566 --rc genhtml_function_coverage=1 00:16:15.566 --rc genhtml_legend=1 00:16:15.566 --rc geninfo_all_blocks=1 00:16:15.566 --rc geninfo_unexecuted_blocks=1 00:16:15.566 00:16:15.566 ' 00:16:15.566 13:36:07 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:15.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.566 --rc genhtml_branch_coverage=1 00:16:15.566 --rc genhtml_function_coverage=1 00:16:15.566 --rc genhtml_legend=1 00:16:15.566 --rc geninfo_all_blocks=1 00:16:15.566 --rc geninfo_unexecuted_blocks=1 00:16:15.566 00:16:15.566 ' 00:16:15.566 13:36:07 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:15.566 13:36:07 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:15.566 13:36:07 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:15.566 13:36:07 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:15.566 13:36:07 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.566 13:36:07 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.566 13:36:07 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.566 13:36:07 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.567 13:36:07 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.567 13:36:07 nvme_scc -- paths/export.sh@5 -- # export PATH 00:16:15.567 13:36:07 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:15.567 13:36:07 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:16:15.567 13:36:07 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.567 13:36:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:16:15.567 13:36:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:16:15.567 13:36:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:16:15.567 13:36:07 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:15.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:15.825 Waiting for block devices as requested 00:16:15.825 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:16.083 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:16.083 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:16.083 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.368 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:21.368 13:36:13 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:21.368 13:36:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:21.368 13:36:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:21.368 13:36:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:21.368 13:36:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.368 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
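What the trace above is doing: nvme_get in nvme/functions.sh runs `nvme id-ctrl` against the device, splits each output line on `:` into a register name and value (`IFS=: read -r reg val`), skips empty values, and evals each pair into a global associative array (hence the repeated `eval 'nvme0[...]="..."'` lines). A minimal sketch of that mechanism, simplified from the traced function; the helper name and exact quoting here are illustrative, not the script's verbatim code:

    # Sketch: snapshot `nvme id-ctrl` output into a global associative
    # array, mirroring the nvme_get loop visible in the trace above.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                 # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # matches the traced [[ -n ... ]] guard
            reg=${reg//[[:space:]]/}        # strip padding around the field name
            eval "${ref}[\$reg]=\${val# }"  # nvme0[mdts]=7, nvme0[sn]='12341 ', ...
        done < <(nvme id-ctrl "$dev")
    }
    nvme_get_sketch nvme0 /dev/nvme0

Afterwards the rest of the test can consult fields like ${nvme0[oacs]} or ${nvme0[mdts]} without shelling out to nvme-cli again, which is why the trace records every identify field up front.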
00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:21.369 13:36:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:21.369 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:21.369 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:21.370 
13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
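Here the same helper is being run as `nvme id-ns /dev/ng0n1`, filling a second array (ng0n1) with per-namespace fields: nsze/ncap/nuse (0x140000 blocks), flbas=0x4, and the lbaf0..lbaf7 format descriptors that follow below. The low nibble of FLBAS selects the in-use LBA format, so flbas=0x4 together with the lbaf4 descriptor recorded further down ('ms:0 lbads:12 rp:0 (in use)') means 4096-byte blocks with no metadata. A hedged sketch of decoding that from the captured array (this helper is illustrative and not part of nvme/functions.sh):

    # Illustrative: derive the active block size from the fields above.
    get_active_block_size() {
        local -n ns=$1                         # e.g. ng0n1, filled by nvme_get
        local fmt=$(( ns[flbas] & 0xf ))       # FLBAS bits 3:0 = in-use format
        local lbads=${ns[lbaf${fmt}]#*lbads:}  # "ms:0 lbads:12 rp:0 (in use)"
        lbads=${lbads%% *}                     # -> "12"
        echo $(( 1 << lbads ))                 # 2^12 = 4096 bytes
    }
    # With flbas=0x4 and lbaf4 as traced here, this prints 4096.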
00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:21.370 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.370 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:21.371 13:36:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:21.371 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:21.371 13:36:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:21.371 13:36:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:21.371 13:36:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:21.371 13:36:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:21.371 13:36:13 
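At this point the walk of nvme0 is complete: its namespace table nvme0_ns is filled in, and functions.sh@60-63 records the controller in the ctrls, nvmes, bdfs, and ordered_ctrls maps before moving on to /sys/class/nvme/nvme1. There, pci_can_use 0000:00:10.0 returns 0 because both the blocked-list regex test and the allow-list emptiness test above pass. A minimal sketch of that gate, assuming the PCI_BLOCKED/PCI_ALLOWED environment-variable convention; the function name matches the trace, but the body is illustrative rather than a verbatim copy of scripts/common.sh:

pci_can_use() {
    local i
    # a BDF on the blocked list is never used (the "[[ =~ 0000:00:10.0 ]]" test above,
    # with an empty left-hand expansion because PCI_BLOCKED is unset)
    [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1
    # no allow-list (the "[[ -z '' ]]" test above) means every remaining device is usable
    [[ -z ${PCI_ALLOWED:-} ]] && return 0
    # otherwise the BDF must appear explicitly on the allow-list
    for i in $PCI_ALLOWED; do
        [[ $i == "$1" ]] && return 0
    done
    return 1
}
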
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:21.371 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 
13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:21.636 
13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.636 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:21.637 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.638 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.638 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:21.639 13:36:13 
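Every eval line in this trace comes out of the same nvme_get read loop: the script runs nvme-cli's id-ctrl (or id-ns), splits each output line at the first colon into reg and val with IFS=:, skips entries with no value, and evals the pair into a global associative array named after the device — eval is used because the array name is dynamic. A condensed, runnable sketch, under the assumption that key cleanup in the real functions.sh is more thorough than shown here:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declare -gA nvme1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # the "[[ -n ... ]]" guards seen above
        reg=${reg//[^a-zA-Z0-9_]/}       # "lbaf  4 " -> "lbaf4" (abridged cleanup)
        val=${val# }                     # drop the space after the colon
        eval "${ref}[${reg}]=\"\$val\""  # nvme1[mdts]="7", nvme1[lbaf4]="ms:0 ..."
    done < <(nvme "$@")                  # the trace runs /usr/local/src/nvme-cli/nvme
}

# Invoked as in the trace: nvme_get nvme1 id-ctrl /dev/nvme1

Note that with IFS=: the remainder of the line, colons included, lands in val, which is how composite values such as "ms:0 lbads:12 rp:0 (in use)" survive intact.
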
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:16:21.639 13:36:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.639 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 
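With nvme1's id-ctrl map built, the namespace loop at functions.sh@54 repeats the same parse for each sysfs child. The extglob pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the character-device view ngXnY and the block-device view nvmeXnY, which is why ng1n1 above and nvme1n1 below are each dumped in full; ${ns##*n} keeps only what follows the final "n", giving the slot index used in _ctrl_ns. An illustrative expansion of the glob:

shopt -s extglob
ctrl=/sys/class/nvme/nvme1
# "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the pattern below is
# /sys/class/nvme/nvme1/@(ng1|nvme1n)* and matches ng1n1 as well as nvme1n1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}      # ng1n1, then nvme1n1
    nsid=${ns##*n}        # -> "1", the key used for _ctrl_ns[1]=...
    echo "$ns_dev -> namespace $nsid"
done

Since ng1n1 and nvme1n1 share index 1, the _ctrl_ns entry written second apparently replaces the first, leaving the block-device name in the map.
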
13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.640 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:16:21.641 
13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.641 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.641 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:16:21.642 13:36:13 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:16:21.642 13:36:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:21.642 13:36:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:21.642 13:36:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:21.642 13:36:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
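The ver word captured just above (nvme2[ver]=0x10400) packs the NVMe specification version into major/minor/tertiary bytes, so this QEMU controller reports NVMe 1.4.0. A one-line decode, with the variable name `ver` as an illustrative stand-in:

    # VER layout per the NVMe spec: bits 31:16 major, 15:8 minor, 7:0 tertiary
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # -> NVMe 1.4.0

The mdts=7 captured alongside it bounds a single transfer to 2^7 memory pages of the controller's minimum page size; that page size (CAP.MPSMIN) is not printed in this trace.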
00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:16:21.642 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
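Fields such as oaes and ctratt above are bitmasks rather than counts: oaes=0x100 sets bit 8, which the NVMe spec defines as support for Namespace Attribute Changed async events. A small sketch for testing such a bit, with the mask taken from the trace and the message text mine:

    # OAES bit 8 = Namespace Attribute Changed async-event support
    oaes=0x100
    (( oaes & (1 << 8) )) && echo "namespace-attribute AENs supported"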
00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:16:21.643 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
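The wctemp and cctemp values captured above are kelvins, per the NVMe convention; converting the values from this trace gives QEMU's usual thresholds of 70 °C (warning) and 100 °C (critical):

    # Temperature thresholds are kelvins; subtract 273 for whole-degree Celsius
    wctemp=343 cctemp=373
    echo "warn at $(( wctemp - 273 ))C, critical at $(( cctemp - 273 ))C"   # warn at 70C, critical at 100C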
00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:21.643 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:21.644 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:21.644 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:21.906 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:21.907 
13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.907 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.908 
13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
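The ng2n1 fields above are enough to size the namespace: flbas=0x4 selects LBA format 4, whose descriptor later in this trace reads `ms:0 lbads:12 (in use)`, i.e. 4096-byte blocks with no metadata, and nsze=0x100000 counts those blocks. A back-of-the-envelope check in bash, using values lifted from the trace:

    # nsze blocks of 2^lbads bytes: 0x100000 * 4096 B = 4 GiB
    nsze=0x100000 lbads=12
    echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"   # -> 4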
00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.908 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.909 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:16:21.910 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 
13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.910 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.911 13:36:13 
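The lbafN entries list the namespace's supported LBA formats: ms is the metadata bytes carried per block, lbads is the data block size as a power of two, and rp is a relative performance hint; nvme-cli tags the format selected by flbas with "(in use)". With flbas=0x4 the active format is lbaf4, ms:0 lbads:12, i.e. plain 4096-byte blocks with no inline metadata:

    # block size implied by lbads for the in-use format (lbaf4, lbads:12)
    echo $(( 1 << 12 ))    # -> 4096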
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.911 13:36:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:16:21.911 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.912 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- 
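Having stored ng2n3, the loop's extended glob, @("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, moves on to the block-device nodes: for ctrl=/sys/class/nvme/nvme2 it matches both the generic character namespaces (ng2n1..ng2n3) and the block namespaces (nvme2n1..), and ${ns##*n} strips everything up to the last "n" to recover the namespace index used as the _ctrl_ns key. Since the nvme2nN nodes sort after the ngNnN ones, each block-device entry overwrites the generic entry stored for the same index. The expansions, with illustrative values:

    ctrl=/sys/class/nvme/nvme2
    echo "ng${ctrl##*nvme}"   # -> ng2     (prefix matching the generic nodes)
    echo "${ctrl##*/}n"       # -> nvme2n  (prefix matching the block nodes)
    ns=$ctrl/nvme2n1
    echo "${ns##*n}"          # -> 1       (index into _ctrl_ns)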
nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:16:21.913 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.914 13:36:13 nvme_scc -- 
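nvme2n1 reports the same geometry as its generic counterpart: nsze, ncap, and nuse are all 0x100000 blocks, so with the in-use 4096-byte format the namespace is 4 GiB, fully allocated and fully utilized. A quick check in plain shell arithmetic (not part of the test itself):

    printf '%d bytes\n' $(( 0x100000 * (1 << 12) ))   # 1048576 blocks * 4096 B = 4294967296 (4 GiB)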
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:21.914 13:36:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.914 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:21.915 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:21.915 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:21.916 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 
13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:22.179 13:36:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:22.179 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:22.180 13:36:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:22.180 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:22.181 13:36:13 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:22.181 13:36:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:22.181 13:36:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:22.181 13:36:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:22.181 13:36:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:22.181 13:36:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:22.181 13:36:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:22.181 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 
13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:22.182 13:36:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 
13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:16:22.182 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:22.183 
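The SQES/CQES values captured just above (0x66 and 0x44) pack two log2 sizes into one byte: the low nibble is the required queue-entry size and the high nibble the maximum. A minimal decoder, assuming plain bash arithmetic (decode_qes is an illustrative name, not a helper that exists in functions.sh):

    # decode_qes: hypothetical helper for illustration, not part of functions.sh.
    # Low nibble = log2(required entry size), high nibble = log2(maximum size).
    decode_qes() {
      local val=$(( $1 ))
      printf 'min=%dB max=%dB\n' $(( 1 << (val & 0xf) )) $(( 1 << ((val >> 4) & 0xf) ))
    }
    decode_qes 0x66   # SQES -> min=64B max=64B (64-byte submission queue entries)
    decode_qes 0x44   # CQES -> min=16B max=16B (16-byte completion queue entries)

Both values are the spec-mandated minimum sizes, which is typical of QEMU's emulated controllers.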
13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:22.183 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:22.184 13:36:14 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:22.184 13:36:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
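The register lookups traced here (functions.sh@69 through @76) rely on a bash nameref: each controller name such as nvme1 is simultaneously the name of an associative array populated by the id-ctrl scan, and local -n turns that string back into the array. A condensed sketch of the mechanism, assuming bash 4.3+ (get_reg is a stand-in name, not the real function):

    # Condensed rendition of the get_nvme_ctrl_feature lookup traced above.
    declare -A nvme1=([oncs]=0x15d [sqes]=0x66)   # filled by the id-ctrl scan
    get_reg() {                                   # illustrative stand-in name
      local ctrl=$1 reg=$2
      [[ -n $ctrl ]] || return 1
      local -n _ctrl=$ctrl                        # _ctrl now aliases the nvme1 array
      [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    get_reg nvme1 oncs                            # prints 0x15d

Storing each controller as a named associative array is what lets the feature scan test (( oncs & 1 << 8 )), bit 8 of ONCS being the Simple Copy command, against every discovered controller in turn.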
00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:16:22.184 13:36:14 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:16:22.184 13:36:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:16:22.184 13:36:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:16:22.184 13:36:14 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:22.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.319 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.319 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.319 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.319 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.319 13:36:15 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:16:23.319 13:36:15 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:23.319 13:36:15 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.319 13:36:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:16:23.319 ************************************ 00:16:23.319 START TEST nvme_simple_copy 00:16:23.320 ************************************ 00:16:23.320 13:36:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:16:23.887 Initializing NVMe Controllers 00:16:23.887 Attaching to 0000:00:10.0 00:16:23.887 Controller supports SCC. Attached to 0000:00:10.0 00:16:23.887 Namespace ID: 1 size: 6GB 00:16:23.887 Initialization complete. 
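The output that follows reports "LBAs matching Written Data: 64": the test writes LBAs 0 through 63 with random data, issues a Simple Copy command with destination LBA 256, and compares the two ranges. A rough userspace equivalent of that final comparison, assuming a kernel-visible namespace at /dev/nvme0n1 with the 4096-byte blocks shown below (paths are illustrative; the real test runs in-process through SPDK while the device is unbound from the kernel driver):

    # Illustrative spot-check only, not part of the simple_copy test binary.
    dd if=/dev/nvme0n1 bs=4096 skip=0   count=64 status=none > /tmp/src.bin
    dd if=/dev/nvme0n1 bs=4096 skip=256 count=64 status=none > /tmp/dst.bin
    cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"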
00:16:23.887 00:16:23.887 Controller QEMU NVMe Ctrl (12340 ) 00:16:23.887 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:16:23.887 Namespace Block Size:4096 00:16:23.887 Writing LBAs 0 to 63 with Random Data 00:16:23.887 Copied LBAs from 0 - 63 to the Destination LBA 256 00:16:23.887 LBAs matching Written Data: 64 00:16:23.887 00:16:23.887 real 0m0.328s 00:16:23.887 user 0m0.134s 00:16:23.887 sys 0m0.092s 00:16:23.887 13:36:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.887 13:36:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 ************************************ 00:16:23.887 END TEST nvme_simple_copy 00:16:23.887 ************************************ 00:16:23.887 ************************************ 00:16:23.887 END TEST nvme_scc 00:16:23.887 ************************************ 00:16:23.887 00:16:23.887 real 0m8.525s 00:16:23.887 user 0m1.686s 00:16:23.887 sys 0m1.683s 00:16:23.887 13:36:15 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.887 13:36:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 13:36:15 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:16:23.887 13:36:15 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:16:23.887 13:36:15 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:16:23.887 13:36:15 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:16:23.887 13:36:15 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:16:23.887 13:36:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:23.887 13:36:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.887 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 ************************************ 00:16:23.887 START TEST nvme_fdp 00:16:23.887 ************************************ 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:16:23.887 * Looking for test storage... 00:16:23.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.887 --rc genhtml_branch_coverage=1 00:16:23.887 --rc genhtml_function_coverage=1 00:16:23.887 --rc genhtml_legend=1 00:16:23.887 --rc geninfo_all_blocks=1 00:16:23.887 --rc geninfo_unexecuted_blocks=1 00:16:23.887 00:16:23.887 ' 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.887 --rc genhtml_branch_coverage=1 00:16:23.887 --rc genhtml_function_coverage=1 00:16:23.887 --rc genhtml_legend=1 00:16:23.887 --rc geninfo_all_blocks=1 00:16:23.887 --rc geninfo_unexecuted_blocks=1 00:16:23.887 00:16:23.887 ' 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.887 --rc genhtml_branch_coverage=1 00:16:23.887 --rc genhtml_function_coverage=1 00:16:23.887 --rc genhtml_legend=1 00:16:23.887 --rc geninfo_all_blocks=1 00:16:23.887 --rc geninfo_unexecuted_blocks=1 00:16:23.887 00:16:23.887 ' 00:16:23.887 13:36:15 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.887 --rc genhtml_branch_coverage=1 00:16:23.887 --rc genhtml_function_coverage=1 00:16:23.887 --rc genhtml_legend=1 00:16:23.887 --rc geninfo_all_blocks=1 00:16:23.887 --rc geninfo_unexecuted_blocks=1 00:16:23.887 00:16:23.887 ' 00:16:23.887 13:36:15 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:23.887 13:36:15 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:23.887 13:36:15 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:23.887 13:36:15 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:23.887 13:36:15 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.887 13:36:15 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.887 13:36:15 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.888 13:36:15 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.888 13:36:15 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.888 13:36:15 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:16:23.888 13:36:15 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:23.888 13:36:15 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:16:23.888 13:36:15 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.888 13:36:15 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:24.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.454 Waiting for block devices as requested 00:16:24.454 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.713 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.713 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.713 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:29.984 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:29.984 13:36:21 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:16:29.984 13:36:21 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:29.984 13:36:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:29.984 13:36:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:29.984 13:36:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:29.984 13:36:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.984 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:29.985 13:36:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:29.985 13:36:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:29.985 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:29.986 13:36:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:29.986 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 
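ONCS 0x15d, parsed again here for nvme0, is the Optional NVM Command Support bitmask; the FDP test only needs bit 8 (Copy), but decoding the whole value shows what these emulated controllers advertise. A short sketch (bit names follow the NVMe base specification; the loop itself is illustrative):

    oncs=0x15d                      # value captured from id-ctrl above
    names=(Compare WriteUncorrectable DatasetMgmt WriteZeroes FeatSaveSelect
           Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do     # 0x15d sets bits 0, 2, 3, 4, 6 and 8
      (( oncs & (1 << i) )) && echo "ONCS bit $i: ${names[$i]}"
    done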
13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:29.987 13:36:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:29.987 13:36:21 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.987 13:36:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:29.988 13:36:21 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:29.988 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
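The xtrace above is nvme/functions.sh's nvme_get helper at work: it runs nvme-cli (id-ctrl or id-ns) against a device, splits each output line on the first ':' by setting IFS=: for read -r reg val, and evals each pair into a global associative array named after the device (nvme0, ng0n1, ...). A minimal standalone sketch of that pattern, reconstructed from the trace; the trimming and quoting details are simplifying assumptions, not the exact upstream code:

    # Sketch: parse "name : value" lines from nvme-cli into a global
    # associative array named by $1 (e.g. nvme0), as traced above.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # same idiom as functions.sh@20
        while IFS=: read -r reg val; do      # split on the first ':'
            reg=${reg//[[:space:]]/}         # drop padding around the key
            val=${val# }
            [[ -n $val ]] || continue        # the [[ -n ... ]] guard above
            eval "${ref}[${reg}]=\"${val}\"" # e.g. nvme0[sqes]="0x66"
        done < <(nvme "$@")                  # e.g. nvme id-ns /dev/ng0n1
    }
    # Usage: nvme_get_sketch ng0n1 id-ns /dev/ng0n1; echo "${ng0n1[nsze]}"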
00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:16:29.989 13:36:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
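Each lbafN value captured above is an LBA format descriptor: ms is the per-block metadata size in bytes, lbads is the LBA data size as a power of two, and rp is the relative performance hint. flbas=0x4 selects lbaf4, i.e. lbads:12, which gives 4096-byte blocks with no metadata and matches the "(in use)" marker. A small sketch to decode one descriptor string as stored in the array:

    # Decode an "ms:M lbads:D rp:R" descriptor captured above (sketch).
    decode_lbaf() {
        local desc=$1 ms lbads rp
        ms=${desc#*ms:};       ms=${ms%% *}
        lbads=${desc#*lbads:}; lbads=${lbads%% *}
        rp=${desc#*rp:};       rp=${rp%% *}
        printf 'metadata=%sB block=%sB rp=%s\n' "$ms" "$((1 << lbads))" "$rp"
    }
    decode_lbaf "${ng0n1[lbaf4]}"   # -> metadata=0B block=4096B rp=0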
00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.989 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:29.990 13:36:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.990 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:29.991 13:36:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:29.991 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:29.992 13:36:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:29.992 13:36:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:29.992 13:36:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:29.992 13:36:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:29.992 13:36:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:29.992 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.992 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
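Between the two id-ctrl dumps, the trace shows the outer scan at functions.sh@47-52: it iterates /sys/class/nvme/nvme*, resolves each controller's PCI address (0000:00:11.0, then 0000:00:10.0), and asks pci_can_use in scripts/common.sh whether that address is blocked before calling nvme_get; the empty left-hand side in the traced "[[ =~ 0000:00:10.0 ]]" is simply an unset block list. A rough sketch of that loop, where the PCI_BLOCKED variable name and the sysfs lookup are assumptions rather than the exact upstream logic:

    # Sketch of the controller scan traced at functions.sh@47-52.
    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0
        # Assumed: PCI_BLOCKED lists BDFs to skip (empty here, so all pass).
        [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme1
        ctrls["$ctrl_dev"]=$ctrl_dev
        bdfs["$ctrl_dev"]=$pci
    done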
00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:29.993 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
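Several of the values just captured are bitfields rather than plain numbers; oacs=0x12a in particular advertises the optional admin commands this emulated controller supports. A hedged decode sketch, with bit positions taken from the NVMe base spec's Identify Controller OACS field (verify against the spec revision you target):

    # Decode of the oacs value captured above; bit names per NVMe base spec.
    oacs=0x12a
    names=( "Security Send/Receive" "Format NVM" "Firmware Commit/Download"
            "Namespace Management" "Device Self-test" "Directives"
            "NVMe-MI Send/Receive" "Virtualization Management"
            "Doorbell Buffer Config" "Get LBA Status" )
    for bit in "${!names[@]}"; do
        (( oacs >> bit & 1 )) && echo "bit $bit: ${names[$bit]}"
    done

For 0x12a this prints Format NVM, Namespace Management, Directives, and Doorbell Buffer Config, a typical feature set for the QEMU NVMe device.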
00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.257 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.258 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:16:30.259 13:36:22 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
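With nsze, ncap, and nuse all 0x17a17a and the in-use LBA format reporting lbads:12, the usable size of this namespace can be sanity-checked directly from the captured values. A quick arithmetic sketch, with the numbers hard-coded from the trace rather than read back out of functions.sh:

    # Namespace size from the fields captured above.
    nsze=0x17a17a      # blocks
    lbads=12           # from the in-use lbaf entry: lbads:12 -> 4096 B blocks
    printf '%d blocks x %d B = %d bytes\n' \
        "$(( nsze ))" "$(( 1 << lbads ))" "$(( nsze << lbads ))"

which works out to 1548666 blocks of 4096 bytes, about 6.3 GB (~5.9 GiB).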
00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:16:30.259 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:16:30.259 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
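The flbas value read a little earlier for ng1n1 (0x7) is what selects the in-use format: its low nibble indexes the lbaf0..lbaf7 list captured just below. A sketch of deriving the block size from the stored fields, with the ng1n1 entries assumed from the values in this trace:

    # Resolve the in-use LBA format from flbas (values assumed from the trace).
    declare -A ng1n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
    fmt=$(( ng1n1[flbas] & 0xf ))        # bits 3:0 select the format index
    lbaf=${ng1n1[lbaf$fmt]}
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
    echo "lbaf$fmt in use: lbads=$lbads, block size $(( 1 << lbads )) B"

which matches the '(in use)' marker nvme-cli prints on lbaf7.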
00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:30.260 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:30.260 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:30.260 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:16:30.261 13:36:22 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:16:30.261 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:30.261 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
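This is the second pass over the same physical namespace: the loop at functions.sh@54 globs both the generic character node (ng1n1) and the block node (nvme1n1) for each controller, so identical Identify Namespace data is captured into two arrays. A sketch of that extglob pattern, with illustrative sysfs paths:

    # Sketch of the namespace glob from functions.sh@54 (paths illustrative).
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "would run: nvme id-ns /dev/${ns##*/}"
    done
    # expands here to .../ng1n1 and .../nvme1n1, hence the duplicated fields

Both entries end up registered in _ctrl_ns, indexed by the namespace number (the ${ns##*n} expansion seen at functions.sh@58).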
00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:16:30.262 13:36:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:30.262 13:36:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:30.262 13:36:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:30.262 13:36:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:30.262 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
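The trace above is nvme/functions.sh caching every "reg : val" pair that nvme-cli prints for the controller into a global associative array (here nvme2): set IFS=:, read one pair per line, skip empty values, then eval the assignment. A minimal sketch of that loop, reconstructed from the trace rather than copied from the helper (the whitespace trimming and the exact nvme-cli invocation are assumptions):

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in the trace; details assumed,
    # not copied verbatim from nvme/functions.sh.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # the [[ -n ... ]] guards seen above
            reg=${reg// /}                 # "lbaf  4" -> key "lbaf4"
            val=${val# }                   # drop the single space after ":"
            eval "${ref}[$reg]=\"\$val\""  # e.g. nvme2[mdts]=7
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    nvme_get nvme2 id-ctrl /dev/nvme2      # later: "${nvme2[oacs]}" and friends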
00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:16:30.263 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
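The values being stored here are raw Identify Controller fields. A few of the ones just recorded for nvme2 decode as follows (a hedged example using the numbers from this trace, with bit positions per the NVMe base specification; the echo strings are illustrative only):

    # oacs=0x12a, frmw=0x3, wctemp=343, cctemp=373 as captured above
    oacs=0x12a frmw=0x3 wctemp=343 cctemp=373
    (( oacs & (1 << 1) )) && echo "OACS bit 1: Format NVM supported"
    (( oacs & (1 << 3) )) && echo "OACS bit 3: Namespace Management supported"
    echo "FRMW firmware slots: $(( (frmw >> 1) & 0x7 ))"      # bits 3:1 -> 1
    echo "temps: warn $(( wctemp - 273 ))C, crit $(( cctemp - 273 ))C"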
00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.263 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:30.264 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.264 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
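Once the power-state entries below close out the id-ctrl parse, the script enumerates nvme2's namespaces with an extglob pattern that matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...), feeding each through the same id-ns parse. A standalone sketch of that discovery step, plus a size check on the values it records (the shopt flags and echos are assumptions; the field values are the ones appearing in this trace):

    shopt -s extglob nullglob                 # @(...|...) needs extglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"      # ng2n1, ng2n2, nvme2n1, ...
    done

    # id-ns below reports flbas=0x4, i.e. lbaf4 (lbads:12 -> 4096 B blocks,
    # marked "(in use)"), and nsze=0x100000 blocks:
    nsze=0x100000 lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"   # 4294967296 (4 GiB)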
00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:16:30.265 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 
13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:30.266 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.267 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val
00:16:30.267 13:36:22 nvme_fdp -- # ng2n2 id-ns: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:16:30.268 13:36:22 nvme_fdp -- # ng2n2 id-ns: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:16:30.268 13:36:22 nvme_fdp -- # ng2n2 id-ns: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:30.268 13:36:22 nvme_fdp -- # ng2n2 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:16:30.268 13:36:22 nvme_fdp -- # ng2n2 id-ns: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:16:30.268 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
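What the trace above is doing: nvme_get runs /usr/local/src/nvme-cli/nvme id-ns against the namespace node, then reads the output line by line with IFS=: and evals each non-empty "field : value" pair into a global associative array named after the namespace (ng2n2, ng2n3, ...). A minimal sketch of that pattern, assuming nvme-cli is on PATH; nvme_get_sketch is a hypothetical stand-in for nvme_get in nvme/functions.sh, not the verbatim source:

#!/usr/bin/env bash
# Split each "field : value" line of `nvme id-ns` on the first colon and
# stash it in a global associative array named after the namespace.
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # "lbaf  4 " -> "lbaf4", "nsze " -> "nsze"
        val=${val# }                # drop the space that follows the colon
        # keep only populated fields, mirroring the [[ -n ... ]] guard in the trace
        [[ -n $reg && -n $val ]] && eval "$ref[$reg]=\$val"
    done < <(nvme id-ns "$dev")     # assumes nvme-cli is installed
}

# e.g.: nvme_get_sketch ng2n2 /dev/ng2n2 && echo "${ng2n2[nsze]}"   # 0x100000

Note that the header line of the id-ns output carries no value after its colon, which is why the trace shows one skipped [[ -n '' ]] check before the first field lands in the array.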
00:16:30.268 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:30.268 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:16:30.268 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:16:30.268 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:16:30.268 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:16:30.532 13:36:22 nvme_fdp -- # ng2n3 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:16:30.532 13:36:22 nvme_fdp -- # ng2n3 id-ns: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:16:30.533 13:36:22 nvme_fdp -- # ng2n3 id-ns: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:16:30.533 13:36:22 nvme_fdp -- # ng2n3 id-ns: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:30.533 13:36:22 nvme_fdp -- # ng2n3 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:16:30.533 13:36:22 nvme_fdp -- # ng2n3 id-ns: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:16:30.534 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
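The outer loop visible in the @54 records enumerates namespace nodes with an extglob that matches both the char devices (ng2n1..ng2n3) and the block devices (nvme2n1..nvme2n3) under the controller's sysfs directory. A self-contained sketch of that loop under the same pattern, with illustrative variable names:

#!/usr/bin/env bash
shopt -s extglob nullglob

# For controller nvme2, @("ng2"|"nvme2n")* matches ng2n1.. and nvme2n1..
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns=()

for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}            # ng2n3, nvme2n1, ...
    # ${ns##*n} strips everything through the last "n", leaving the
    # namespace id -- hence _ctrl_ns[${ns##*n}]=ng2n3 in the trace
    _ctrl_ns[${ns##*n}]=$ns_dev
done

declare -p _ctrl_ns

Because the glob yields the ng2n* names before the nvme2n* names, the block-device entries later overwrite the char-device entries for the same namespace id, which matches the order this log walks the nodes.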
00:16:30.534 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:30.534 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:16:30.534 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:16:30.534 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:16:30.534 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:16:30.534 13:36:22 nvme_fdp -- # nvme2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:16:30.534 13:36:22 nvme_fdp -- # nvme2n1 id-ns: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:16:30.534 13:36:22 nvme_fdp -- # nvme2n1 id-ns: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:16:30.535 13:36:22 nvme_fdp -- # nvme2n1 id-ns: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:30.535 13:36:22 nvme_fdp -- # nvme2n1 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:16:30.535 13:36:22 nvme_fdp -- # nvme2n1 id-ns: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:16:30.535 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
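Every namespace in this run reports flbas=0x4 with lbaf4 "ms:0 lbads:12" flagged in use, i.e. 4096-byte data blocks with no per-block metadata. A small sketch decoding those captured values; the array literal is copied from the log and the variable names are illustrative:

#!/usr/bin/env bash
# flbas bits 0-3 select the active LBA format; lbads is log2 of the block size.
declare -A nvme2n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    [nsze]=0x100000
)

fmt=$(( nvme2n1[flbas] & 0xf ))             # bits 0-3: in-use LBA format index
lbaf=${nvme2n1[lbaf$fmt]}

lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # pull "12" out of the lbaf string
block_size=$(( 1 << lbads ))                # 2^12 = 4096 bytes

echo "format #$fmt: ${block_size}B blocks, $(( nvme2n1[nsze] )) LBAs"
# -> format #4: 4096B blocks, 1048576 LBAs

With nsze=0x100000 LBAs at 4096 bytes each, that puts every namespace at 4 GiB of addressable data.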
00:16:30.535 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:30.535 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:16:30.535 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:16:30.535 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:16:30.535 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:16:30.535 13:36:22 nvme_fdp -- # nvme2n2 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:16:30.536 13:36:22 nvme_fdp -- # nvme2n2 id-ns: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:16:30.536 13:36:22 nvme_fdp -- # nvme2n2 id-ns: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:16:30.536 13:36:22 nvme_fdp -- # nvme2n2 id-ns: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:30.536 13:36:22 nvme_fdp -- # nvme2n2 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:16:30.537 13:36:22 nvme_fdp -- # nvme2n2 id-ns: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
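Note that the char node ng2n2 and the block node nvme2n2 report identical id-ns fields above. A quick cross-check sketch over the captured arrays makes that parity explicit; the arrays are trimmed to a few representative keys from the log, and the check itself is purely illustrative, not part of the harness:

#!/usr/bin/env bash
declare -A ng2n2=(   [nsze]=0x100000 [flbas]=0x4 [mssrl]=128 [msrc]=127 )
declare -A nvme2n2=( [nsze]=0x100000 [flbas]=0x4 [mssrl]=128 [msrc]=127 )

# Every field captured via the char device must match the block device.
for key in "${!ng2n2[@]}"; do
    if [[ ${ng2n2[$key]} != "${nvme2n2[$key]}" ]]; then
        echo "mismatch on $key: ${ng2n2[$key]} vs ${nvme2n2[$key]}" >&2
        exit 1
    fi
done
echo "char and block nodes agree on ${#ng2n2[@]} fields"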
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:30.537 13:36:22 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:30.537 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:30.538 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:30.538 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:30.538 13:36:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:30.539 13:36:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:30.539 13:36:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:30.539 13:36:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:30.539 13:36:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
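The wall of trace above is nvme/functions.sh@16-23 caching identify data: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) against the device, splits each output line on ':' via IFS into a register name and a value, and evals the pair into a global associative array, yielding entries such as nvme3[vid]=0x1b36 and nvme3[mdts]=7. A minimal standalone sketch of the same pattern (the array and device names here are illustrative, not the script's own):

  # Sketch, assuming nvme-cli's "reg : value" output format: cache id-ctrl
  # fields in a bash associative array, as the nvme_get trace above does.
  declare -A ctrl
  dev=/dev/nvme3
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # drop the padding around the field name
      [[ -n $reg && -n $val ]] || continue
      ctrl[$reg]=${val# }             # e.g. ctrl[vid]=0x1b36, ctrl[mdts]=7
  done < <(nvme id-ctrl "$dev")
  echo "mdts=${ctrl[mdts]}"

The id-ns traces for nvme2n2/nvme2n3 earlier follow the same pattern per namespace; there flbas=0x4 selects lbaf4, whose lbads:12 means 2^12 = 4096-byte logical blocks, the format marked '(in use)'.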
00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.539 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 
13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.540 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
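Two of the values just cached decode further: sqes=0x66 and cqes=0x44 pack the required (low nibble) and maximum (high nibble) queue entry sizes as powers of two, so this controller both requires and supports 64-byte submission queue entries and 16-byte completion queue entries. A quick check with shell arithmetic:

  # SQES/CQES pack log2(entry size): low nibble = required, high nibble = maximum
  v=0x66; echo $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4) ))   # -> 64 64
  v=0x44; echo $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4) ))   # -> 16 16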
00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:30.541 13:36:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:30.542 13:36:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:16:30.542 13:36:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:16:30.542 13:36:22 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:16:30.542 13:36:22 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:31.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:31.674 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:31.674 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:31.674 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:31.674 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:31.674 13:36:23 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:31.674 13:36:23 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:31.674 13:36:23 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.674 13:36:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:31.931 ************************************ 00:16:31.931 START TEST nvme_flexible_data_placement 00:16:31.931 ************************************ 00:16:31.931 13:36:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:32.189 Initializing NVMe Controllers 00:16:32.189 Attaching to 0000:00:13.0 00:16:32.189 Controller supports FDP Attached to 0000:00:13.0 00:16:32.189 Namespace ID: 1 Endurance Group ID: 1 00:16:32.189 Initialization complete. 
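The controller selection traced above reduces to a single bitmask test: CTRATT bit 19 flags Flexible Data Placement support, and of the values reported here only nvme3's 0x88010 has it set (0x88010 & 0x80000 != 0), while 0x8000 does not. A minimal standalone sketch of the same check, with the hex values copied from the trace (the helper name is illustrative, not part of the test suite):

    #!/usr/bin/env bash
    # Succeeds when CTRATT bit 19 (Flexible Data Placement supported) is set.
    ctrl_reports_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }
    ctrl_reports_fdp 0x8000  && echo "nvme0 supports FDP"   # bit clear: prints nothing
    ctrl_reports_fdp 0x88010 && echo "nvme3 supports FDP"   # bit set: prints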
00:16:32.189 00:16:32.189 ================================== 00:16:32.189 == FDP tests for Namespace: #01 == 00:16:32.189 ================================== 00:16:32.189 00:16:32.189 Get Feature: FDP: 00:16:32.189 ================= 00:16:32.189 Enabled: Yes 00:16:32.189 FDP configuration Index: 0 00:16:32.189 00:16:32.189 FDP configurations log page 00:16:32.189 =========================== 00:16:32.189 Number of FDP configurations: 1 00:16:32.189 Version: 0 00:16:32.189 Size: 112 00:16:32.189 FDP Configuration Descriptor: 0 00:16:32.189 Descriptor Size: 96 00:16:32.189 Reclaim Group Identifier format: 2 00:16:32.189 FDP Volatile Write Cache: Not Present 00:16:32.189 FDP Configuration: Valid 00:16:32.189 Vendor Specific Size: 0 00:16:32.189 Number of Reclaim Groups: 2 00:16:32.189 Number of Reclaim Unit Handles: 8 00:16:32.189 Max Placement Identifiers: 128 00:16:32.189 Number of Namespaces Supported: 256 00:16:32.189 Reclaim Unit Nominal Size: 6000000 bytes 00:16:32.189 Estimated Reclaim Unit Time Limit: Not Reported 00:16:32.189 RUH Desc #000: RUH Type: Initially Isolated 00:16:32.189 RUH Desc #001: RUH Type: Initially Isolated 00:16:32.189 RUH Desc #002: RUH Type: Initially Isolated 00:16:32.189 RUH Desc #003: RUH Type: Initially Isolated 00:16:32.189 RUH Desc #004: RUH Type: Initially Isolated 00:16:32.189 RUH Desc #005: RUH Type: Initially Isolated 00:16:32.189 RUH Desc #006: RUH Type: Initially Isolated 00:16:32.189 RUH Desc #007: RUH Type: Initially Isolated 00:16:32.189 00:16:32.189 FDP reclaim unit handle usage log page 00:16:32.189 ====================================== 00:16:32.189 Number of Reclaim Unit Handles: 8 00:16:32.189 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:16:32.189 RUH Usage Desc #001: RUH Attributes: Unused 00:16:32.189 RUH Usage Desc #002: RUH Attributes: Unused 00:16:32.189 RUH Usage Desc #003: RUH Attributes: Unused 00:16:32.189 RUH Usage Desc #004: RUH Attributes: Unused 00:16:32.189 RUH Usage Desc #005: RUH Attributes: Unused 00:16:32.189 RUH Usage Desc #006: RUH Attributes: Unused 00:16:32.189 RUH Usage Desc #007: RUH Attributes: Unused 00:16:32.189 00:16:32.189 FDP statistics log page 00:16:32.189 ======================= 00:16:32.189 Host bytes with metadata written: 706859008 00:16:32.189 Media bytes with metadata written: 707055616 00:16:32.189 Media bytes erased: 0 00:16:32.189 00:16:32.189 FDP Reclaim unit handle status 00:16:32.189 ============================== 00:16:32.189 Number of RUHS descriptors: 2 00:16:32.189 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005de3 00:16:32.189 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:16:32.189 00:16:32.189 FDP write on placement id: 0 success 00:16:32.189 00:16:32.189 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:16:32.189 00:16:32.189 IO mgmt send: RUH update for Placement ID: #0 Success 00:16:32.189 00:16:32.189 Get Feature: FDP Events for Placement handle: #0 00:16:32.189 ======================== 00:16:32.189 Number of FDP Events: 6 00:16:32.189 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:16:32.189 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:16:32.189 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:16:32.189 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:16:32.189 FDP Event: #4 Type: Media Reallocated Enabled: No 00:16:32.189 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:16:32.189 00:16:32.189 FDP events log page
00:16:32.189 =================== 00:16:32.189 Number of FDP events: 1 00:16:32.189 FDP Event #0: 00:16:32.189 Event Type: RU Not Written to Capacity 00:16:32.189 Placement Identifier: Valid 00:16:32.189 NSID: Valid 00:16:32.189 Location: Valid 00:16:32.189 Placement Identifier: 0 00:16:32.189 Event Timestamp: 9 00:16:32.189 Namespace Identifier: 1 00:16:32.189 Reclaim Group Identifier: 0 00:16:32.189 Reclaim Unit Handle Identifier: 0 00:16:32.189 00:16:32.189 FDP test passed 00:16:32.189 00:16:32.189 real 0m0.299s 00:16:32.189 user 0m0.121s 00:16:32.189 sys 0m0.077s 00:16:32.189 13:36:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.189 13:36:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:16:32.189 ************************************ 00:16:32.189 END TEST nvme_flexible_data_placement 00:16:32.189 ************************************ 00:16:32.189 00:16:32.189 real 0m8.317s 00:16:32.189 user 0m1.613s 00:16:32.189 sys 0m1.694s 00:16:32.189 13:36:24 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.189 ************************************ 00:16:32.189 13:36:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:32.189 END TEST nvme_fdp 00:16:32.189 ************************************ 00:16:32.189 13:36:24 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:16:32.189 13:36:24 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:32.189 13:36:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:32.189 13:36:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.189 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:16:32.190 ************************************ 00:16:32.190 START TEST nvme_rpc 00:16:32.190 ************************************ 00:16:32.190 13:36:24 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:32.190 * Looking for test storage... 
00:16:32.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:32.190 13:36:24 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:32.190 13:36:24 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:32.190 13:36:24 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:32.448 13:36:24 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.448 13:36:24 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.449 13:36:24 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:32.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.449 --rc genhtml_branch_coverage=1 00:16:32.449 --rc genhtml_function_coverage=1 00:16:32.449 --rc genhtml_legend=1 00:16:32.449 --rc geninfo_all_blocks=1 00:16:32.449 --rc geninfo_unexecuted_blocks=1 00:16:32.449 00:16:32.449 ' 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:32.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.449 --rc genhtml_branch_coverage=1 00:16:32.449 --rc genhtml_function_coverage=1 00:16:32.449 --rc genhtml_legend=1 00:16:32.449 --rc geninfo_all_blocks=1 00:16:32.449 --rc geninfo_unexecuted_blocks=1 00:16:32.449 00:16:32.449 ' 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:16:32.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.449 --rc genhtml_branch_coverage=1 00:16:32.449 --rc genhtml_function_coverage=1 00:16:32.449 --rc genhtml_legend=1 00:16:32.449 --rc geninfo_all_blocks=1 00:16:32.449 --rc geninfo_unexecuted_blocks=1 00:16:32.449 00:16:32.449 ' 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:32.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.449 --rc genhtml_branch_coverage=1 00:16:32.449 --rc genhtml_function_coverage=1 00:16:32.449 --rc genhtml_legend=1 00:16:32.449 --rc geninfo_all_blocks=1 00:16:32.449 --rc geninfo_unexecuted_blocks=1 00:16:32.449 00:16:32.449 ' 00:16:32.449 13:36:24 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.449 13:36:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:16:32.449 13:36:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:16:32.449 13:36:24 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67461 00:16:32.449 13:36:24 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:16:32.449 13:36:24 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67461 00:16:32.449 13:36:24 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67461 ']' 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.449 13:36:24 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.449 [2024-11-20 13:36:24.471525] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:16:32.449 [2024-11-20 13:36:24.471696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67461 ] 00:16:32.707 [2024-11-20 13:36:24.662995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.964 [2024-11-20 13:36:24.813931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.964 [2024-11-20 13:36:24.813947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.907 13:36:25 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.907 13:36:25 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:33.907 13:36:25 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:16:34.166 Nvme0n1 00:16:34.166 13:36:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:16:34.166 13:36:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:16:34.425 request: 00:16:34.425 { 00:16:34.425 "bdev_name": "Nvme0n1", 00:16:34.425 "filename": "non_existing_file", 00:16:34.425 "method": "bdev_nvme_apply_firmware", 00:16:34.425 "req_id": 1 00:16:34.425 } 00:16:34.425 Got JSON-RPC error response 00:16:34.425 response: 00:16:34.425 { 00:16:34.425 "code": -32603, 00:16:34.425 "message": "open file failed." 00:16:34.425 } 00:16:34.425 13:36:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:16:34.425 13:36:26 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:16:34.425 13:36:26 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:34.684 13:36:26 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:34.684 13:36:26 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67461 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67461 ']' 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67461 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67461 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.684 killing process with pid 67461 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67461' 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67461 00:16:34.684 13:36:26 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67461 00:16:36.586 00:16:36.586 real 0m4.501s 00:16:36.586 user 0m8.879s 00:16:36.586 sys 0m0.625s 00:16:36.586 13:36:28 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.586 13:36:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.586 ************************************ 00:16:36.586 END TEST nvme_rpc 00:16:36.586 ************************************ 00:16:36.845 13:36:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:36.845 13:36:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:16:36.845 13:36:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.845 13:36:28 -- common/autotest_common.sh@10 -- # set +x 00:16:36.845 ************************************ 00:16:36.845 START TEST nvme_rpc_timeouts 00:16:36.845 ************************************ 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:36.845 * Looking for test storage... 00:16:36.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.845 13:36:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:36.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.845 --rc genhtml_branch_coverage=1 00:16:36.845 --rc genhtml_function_coverage=1 00:16:36.845 --rc genhtml_legend=1 00:16:36.845 --rc geninfo_all_blocks=1 00:16:36.845 --rc geninfo_unexecuted_blocks=1 00:16:36.845 00:16:36.845 ' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:36.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.845 --rc genhtml_branch_coverage=1 00:16:36.845 --rc genhtml_function_coverage=1 00:16:36.845 --rc genhtml_legend=1 00:16:36.845 --rc geninfo_all_blocks=1 00:16:36.845 --rc geninfo_unexecuted_blocks=1 00:16:36.845 00:16:36.845 ' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:36.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.845 --rc genhtml_branch_coverage=1 00:16:36.845 --rc genhtml_function_coverage=1 00:16:36.845 --rc genhtml_legend=1 00:16:36.845 --rc geninfo_all_blocks=1 00:16:36.845 --rc geninfo_unexecuted_blocks=1 00:16:36.845 00:16:36.845 ' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:36.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.845 --rc genhtml_branch_coverage=1 00:16:36.845 --rc genhtml_function_coverage=1 00:16:36.845 --rc genhtml_legend=1 00:16:36.845 --rc geninfo_all_blocks=1 00:16:36.845 --rc geninfo_unexecuted_blocks=1 00:16:36.845 00:16:36.845 ' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.845 13:36:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67533 00:16:36.845 13:36:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67533 00:16:36.845 13:36:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67571 00:16:36.845 13:36:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:36.845 13:36:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:16:36.845 13:36:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67571 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67571 ']' 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.845 13:36:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:37.104 [2024-11-20 13:36:28.989674] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:16:37.104 [2024-11-20 13:36:28.989856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67571 ] 00:16:37.361 [2024-11-20 13:36:29.172191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:37.361 [2024-11-20 13:36:29.284206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.361 [2024-11-20 13:36:29.284217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.296 13:36:30 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.296 13:36:30 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:16:38.296 Checking default timeout settings: 00:16:38.296 13:36:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:16:38.296 13:36:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:38.554 Making settings changes with rpc: 00:16:38.554 13:36:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:16:38.554 13:36:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:16:38.813 Check default vs. modified settings: 00:16:38.813 13:36:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:16:38.813 13:36:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67533 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67533 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:16:39.400 Setting action_on_timeout is changed as expected. 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67533 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67533 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:16:39.400 Setting timeout_us is changed as expected. 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67533 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67533 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:16:39.400 Setting timeout_admin_us is changed as expected. 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67533 /tmp/settings_modified_67533 00:16:39.400 13:36:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67571 00:16:39.400 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67571 ']' 00:16:39.400 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67571 00:16:39.400 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:16:39.401 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.401 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67571 00:16:39.401 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.401 killing process with pid 67571 00:16:39.401 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.401 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67571' 00:16:39.401 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67571 00:16:39.401 13:36:31 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67571 00:16:41.322 RPC TIMEOUT SETTING TEST PASSED. 00:16:41.322 13:36:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
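The three "changed as expected" checks above boil down to snapshotting save_config before and after bdev_nvme_set_options and running the same grep/awk/sed pipeline over both files. Condensed from the trace into one runnable sketch (paths and values as in this run; the comparison against the exact expected values is simplified here to an inequality check):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py save_config > /tmp/settings_default_67533          # snapshot before the change
    $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc_py save_config > /tmp/settings_modified_67533         # snapshot after the change
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67533 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67533 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done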
00:16:41.322 00:16:41.322 real 0m4.621s 00:16:41.322 user 0m9.104s 00:16:41.322 sys 0m0.629s 00:16:41.322 13:36:33 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.322 ************************************ 00:16:41.322 END TEST nvme_rpc_timeouts 00:16:41.322 ************************************ 00:16:41.322 13:36:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:41.322 13:36:33 -- spdk/autotest.sh@239 -- # uname -s 00:16:41.322 13:36:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:16:41.322 13:36:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:41.322 13:36:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.322 13:36:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.322 13:36:33 -- common/autotest_common.sh@10 -- # set +x 00:16:41.322 ************************************ 00:16:41.322 START TEST sw_hotplug 00:16:41.322 ************************************ 00:16:41.322 13:36:33 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:41.587 * Looking for test storage... 00:16:41.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.587 13:36:33 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:41.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.587 --rc genhtml_branch_coverage=1 00:16:41.587 --rc genhtml_function_coverage=1 00:16:41.587 --rc genhtml_legend=1 00:16:41.587 --rc geninfo_all_blocks=1 00:16:41.587 --rc geninfo_unexecuted_blocks=1 00:16:41.587 00:16:41.587 ' 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:41.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.587 --rc genhtml_branch_coverage=1 00:16:41.587 --rc genhtml_function_coverage=1 00:16:41.587 --rc genhtml_legend=1 00:16:41.587 --rc geninfo_all_blocks=1 00:16:41.587 --rc geninfo_unexecuted_blocks=1 00:16:41.587 00:16:41.587 ' 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:41.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.587 --rc genhtml_branch_coverage=1 00:16:41.587 --rc genhtml_function_coverage=1 00:16:41.587 --rc genhtml_legend=1 00:16:41.587 --rc geninfo_all_blocks=1 00:16:41.587 --rc geninfo_unexecuted_blocks=1 00:16:41.587 00:16:41.587 ' 00:16:41.587 13:36:33 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:41.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.587 --rc genhtml_branch_coverage=1 00:16:41.587 --rc genhtml_function_coverage=1 00:16:41.587 --rc genhtml_legend=1 00:16:41.587 --rc geninfo_all_blocks=1 00:16:41.587 --rc geninfo_unexecuted_blocks=1 00:16:41.587 00:16:41.587 ' 00:16:41.587 13:36:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:41.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.125 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:42.125 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:42.125 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:42.125 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:42.125 13:36:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:16:42.125 13:36:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:16:42.125 13:36:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:16:42.125 13:36:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@233 -- # local class 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:16:42.125 13:36:33 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:42.125 13:36:34 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:16:42.125 13:36:34 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:42.125 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:16:42.125 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:16:42.125 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:42.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.671 Waiting for block devices as requested 00:16:42.671 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.671 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.933 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.933 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.203 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:48.203 13:36:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:16:48.203 13:36:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:48.462 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:16:48.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:48.462 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:16:48.720 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:16:48.978 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:48.978 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:48.978 13:36:40 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:16:48.978 13:36:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68443 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:16:49.236 13:36:41 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:49.236 13:36:41 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:49.236 13:36:41 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:49.236 13:36:41 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:49.236 13:36:41 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:49.236 13:36:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:49.494 Initializing NVMe Controllers 00:16:49.494 Attaching to 0000:00:10.0 00:16:49.494 Attaching to 0000:00:11.0 00:16:49.494 Attached to 0000:00:10.0 00:16:49.494 Attached to 0000:00:11.0 00:16:49.494 Initialization complete. Starting I/O... 
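The elapsed-time figure this run reports at the end ("remove_attach_helper took 43.03s") comes from bash's time keyword with TIMEFORMAT=%2R, which limits the report to real time with two decimals; the helper captures that string and feeds it to a final printf. A minimal sketch of the same pattern, assuming the timed command's own output can be discarded (the traced helpers live in common/autotest_common.sh; this function body is illustrative):

    run_timed() {
        local TIMEFORMAT=%2R elapsed
        # `time` writes its report to the brace group's stderr; the command's
        # own stdout/stderr go to /dev/null so only the elapsed time is captured.
        elapsed=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        printf '%s took %ss to complete\n' "$1" "$elapsed"
    }
    run_timed sleep 2   # prints: sleep took 2.00s to complete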
00:16:49.494 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:16:49.494 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:16:49.494 00:16:50.427 QEMU NVMe Ctrl (12340 ): 1052 I/Os completed (+1052) 00:16:50.427 QEMU NVMe Ctrl (12341 ): 1228 I/Os completed (+1228) 00:16:50.427 00:16:51.371 QEMU NVMe Ctrl (12340 ): 2370 I/Os completed (+1318) 00:16:51.371 QEMU NVMe Ctrl (12341 ): 2660 I/Os completed (+1432) 00:16:51.371 00:16:52.337 QEMU NVMe Ctrl (12340 ): 3974 I/Os completed (+1604) 00:16:52.337 QEMU NVMe Ctrl (12341 ): 4549 I/Os completed (+1889) 00:16:52.337 00:16:53.715 QEMU NVMe Ctrl (12340 ): 5617 I/Os completed (+1643) 00:16:53.715 QEMU NVMe Ctrl (12341 ): 6297 I/Os completed (+1748) 00:16:53.715 00:16:54.285 QEMU NVMe Ctrl (12340 ): 7134 I/Os completed (+1517) 00:16:54.285 QEMU NVMe Ctrl (12341 ): 8512 I/Os completed (+2215) 00:16:54.285 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:55.219 [2024-11-20 13:36:47.068328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:55.219 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:55.219 [2024-11-20 13:36:47.070922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.071007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.071045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.071077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:55.219 [2024-11-20 13:36:47.074593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.074668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.074697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.074724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:55.219 [2024-11-20 13:36:47.095532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:55.219 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:55.219 [2024-11-20 13:36:47.097676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.097752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.097790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.097827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:55.219 [2024-11-20 13:36:47.101044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.101123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.101157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 [2024-11-20 13:36:47.101187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:55.219 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:55.219 EAL: Scan for (pci) bus failed. 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:55.219 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:55.478 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:55.478 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:55.478 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:55.478 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:55.478 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:55.478 Attaching to 0000:00:10.0 00:16:55.478 Attached to 0000:00:10.0 00:16:55.478 QEMU NVMe Ctrl (12340 ): 15 I/Os completed (+15) 00:16:55.478 00:16:55.478 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:55.478 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:55.479 13:36:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:55.479 Attaching to 0000:00:11.0 00:16:55.479 Attached to 0000:00:11.0 00:16:56.488 QEMU NVMe Ctrl (12340 ): 1516 I/Os completed (+1501) 00:16:56.488 QEMU NVMe Ctrl (12341 ): 1573 I/Os completed (+1573) 00:16:56.488 00:16:57.422 QEMU NVMe Ctrl (12340 ): 3010 I/Os completed (+1494) 00:16:57.422 QEMU NVMe Ctrl (12341 ): 3300 I/Os completed (+1727) 00:16:57.422 00:16:58.357 QEMU NVMe Ctrl (12340 ): 4717 I/Os completed (+1707) 00:16:58.357 QEMU NVMe Ctrl (12341 ): 5224 I/Os completed (+1924) 00:16:58.357 00:16:59.294 QEMU NVMe Ctrl (12340 ): 6368 I/Os completed (+1651) 00:16:59.294 QEMU NVMe Ctrl (12341 ): 7082 I/Os completed (+1858) 00:16:59.294 00:17:00.671 QEMU NVMe Ctrl (12340 ): 7935 I/Os completed (+1567) 00:17:00.671 QEMU NVMe Ctrl (12341 ): 8864 I/Os completed (+1782) 00:17:00.671 00:17:01.297 QEMU NVMe Ctrl (12340 ): 9433 I/Os completed (+1498) 00:17:01.297 QEMU NVMe Ctrl (12341 ): 10841 I/Os completed (+1977) 00:17:01.297 00:17:02.688 QEMU NVMe Ctrl (12340 ): 11023 I/Os completed (+1590) 00:17:02.688 
QEMU NVMe Ctrl (12341 ): 12631 I/Os completed (+1790) 00:17:02.688 00:17:03.622 QEMU NVMe Ctrl (12340 ): 12605 I/Os completed (+1582) 00:17:03.622 QEMU NVMe Ctrl (12341 ): 14360 I/Os completed (+1729) 00:17:03.622 00:17:04.554 QEMU NVMe Ctrl (12340 ): 14125 I/Os completed (+1520) 00:17:04.554 QEMU NVMe Ctrl (12341 ): 16280 I/Os completed (+1920) 00:17:04.554 00:17:05.488 QEMU NVMe Ctrl (12340 ): 15784 I/Os completed (+1659) 00:17:05.488 QEMU NVMe Ctrl (12341 ): 18172 I/Os completed (+1892) 00:17:05.488 00:17:06.422 QEMU NVMe Ctrl (12340 ): 17187 I/Os completed (+1403) 00:17:06.422 QEMU NVMe Ctrl (12341 ): 19966 I/Os completed (+1794) 00:17:06.422 00:17:07.354 QEMU NVMe Ctrl (12340 ): 18983 I/Os completed (+1796) 00:17:07.355 QEMU NVMe Ctrl (12341 ): 22014 I/Os completed (+2048) 00:17:07.355 00:17:07.355 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:07.355 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:07.355 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:07.355 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:07.355 [2024-11-20 13:36:59.389759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:07.355 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:07.355 [2024-11-20 13:36:59.392032] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.355 [2024-11-20 13:36:59.392129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.355 [2024-11-20 13:36:59.392185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.355 [2024-11-20 13:36:59.392237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.613 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:07.613 [2024-11-20 13:36:59.395241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.613 [2024-11-20 13:36:59.395324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.613 [2024-11-20 13:36:59.395371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.613 [2024-11-20 13:36:59.395415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.613 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:07.613 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:07.613 [2024-11-20 13:36:59.421120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:07.613 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:07.613 [2024-11-20 13:36:59.422932] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 [2024-11-20 13:36:59.422990] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 [2024-11-20 13:36:59.423024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 [2024-11-20 13:36:59.423048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:07.614 [2024-11-20 13:36:59.425578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 [2024-11-20 13:36:59.425629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 [2024-11-20 13:36:59.425655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 [2024-11-20 13:36:59.425678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:07.614 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:17:07.614 EAL: Scan for (pci) bus failed. 00:17:07.614 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:07.614 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:07.614 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:07.614 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:07.614 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:07.614 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:07.872 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:07.872 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:07.872 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:07.872 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:07.872 Attaching to 0000:00:10.0 00:17:07.872 Attached to 0000:00:10.0 00:17:07.872 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:07.872 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:07.872 13:36:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:07.872 Attaching to 0000:00:11.0 00:17:07.872 Attached to 0000:00:11.0 00:17:08.437 QEMU NVMe Ctrl (12340 ): 1142 I/Os completed (+1142) 00:17:08.437 QEMU NVMe Ctrl (12341 ): 1162 I/Os completed (+1162) 00:17:08.437 00:17:09.369 QEMU NVMe Ctrl (12340 ): 2777 I/Os completed (+1635) 00:17:09.369 QEMU NVMe Ctrl (12341 ): 3041 I/Os completed (+1879) 00:17:09.369 00:17:10.305 QEMU NVMe Ctrl (12340 ): 4376 I/Os completed (+1599) 00:17:10.305 QEMU NVMe Ctrl (12341 ): 4837 I/Os completed (+1796) 00:17:10.305 00:17:11.682 QEMU NVMe Ctrl (12340 ): 5980 I/Os completed (+1604) 00:17:11.682 QEMU NVMe Ctrl (12341 ): 6726 I/Os completed (+1889) 00:17:11.682 00:17:12.312 QEMU NVMe Ctrl (12340 ): 7635 I/Os completed (+1655) 00:17:12.312 QEMU NVMe Ctrl (12341 ): 8476 I/Os completed (+1750) 00:17:12.312 00:17:13.687 QEMU NVMe Ctrl (12340 ): 9119 I/Os completed (+1484) 00:17:13.687 QEMU NVMe Ctrl (12341 ): 10232 I/Os completed (+1756) 00:17:13.687 00:17:14.620 QEMU NVMe Ctrl (12340 ): 10737 I/Os completed (+1618) 00:17:14.620 QEMU NVMe Ctrl (12341 ): 12266 I/Os completed (+2034) 00:17:14.620 
00:17:15.556 QEMU NVMe Ctrl (12340 ): 12250 I/Os completed (+1513) 00:17:15.556 QEMU NVMe Ctrl (12341 ): 14066 I/Os completed (+1800) 00:17:15.556 00:17:16.492 QEMU NVMe Ctrl (12340 ): 13702 I/Os completed (+1452) 00:17:16.492 QEMU NVMe Ctrl (12341 ): 15914 I/Os completed (+1848) 00:17:16.492 00:17:17.448 QEMU NVMe Ctrl (12340 ): 15161 I/Os completed (+1459) 00:17:17.448 QEMU NVMe Ctrl (12341 ): 17683 I/Os completed (+1769) 00:17:17.448 00:17:18.387 QEMU NVMe Ctrl (12340 ): 16894 I/Os completed (+1733) 00:17:18.387 QEMU NVMe Ctrl (12341 ): 19578 I/Os completed (+1895) 00:17:18.387 00:17:19.322 QEMU NVMe Ctrl (12340 ): 18473 I/Os completed (+1579) 00:17:19.322 QEMU NVMe Ctrl (12341 ): 21464 I/Os completed (+1886) 00:17:19.322 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:19.889 [2024-11-20 13:37:11.773202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:19.889 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:19.889 [2024-11-20 13:37:11.775193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.775262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.775291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.775316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:19.889 [2024-11-20 13:37:11.778258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.778319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.778350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.778372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:19.889 [2024-11-20 13:37:11.796851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
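The "remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s))" summary printed at the end of this event comes from bash's built-in `time` with a custom format (the trace later shows `local time=0 TIMEFORMAT=%2R`). A stripped-down sketch of that pattern; the exact redirection plumbing inside timing_cmd is an assumption, since only fragments of it are traced:

    TIMEFORMAT=%2R                               # wall-clock seconds only, two decimals
    helper_time=$( { time remove_attach_helper 3 6 true > /dev/null; } 2>&1 )
    # note: any stderr from the helper would mix into the capture here;
    # the real wrapper separates the streams before capturing
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2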
00:17:19.889 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:19.889 [2024-11-20 13:37:11.799549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.799641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.799685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.799719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:19.889 [2024-11-20 13:37:11.803450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.803525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.803566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 [2024-11-20 13:37:11.803596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:19.889 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:20.148 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:20.148 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:20.148 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:20.148 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:20.148 13:37:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:20.148 Attaching to 0000:00:10.0 00:17:20.148 Attached to 0000:00:10.0 00:17:20.148 13:37:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:20.148 13:37:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:20.148 13:37:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:20.148 Attaching to 0000:00:11.0 00:17:20.148 Attached to 0000:00:11.0 00:17:20.148 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:20.148 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:20.148 [2024-11-20 13:37:12.102737] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:17:32.357 13:37:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:32.357 13:37:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:32.357 13:37:24 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.03 00:17:32.357 13:37:24 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.03 00:17:32.357 13:37:24 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:32.357 13:37:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.03 00:17:32.357 13:37:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.03 2 00:17:32.357 remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s)) 13:37:24 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68443 00:17:38.920 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68443) - No such process 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68443 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68984 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68984 00:17:38.920 13:37:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.920 13:37:30 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68984 ']' 00:17:38.920 13:37:30 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.920 13:37:30 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.920 13:37:30 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.920 13:37:30 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.920 13:37:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:38.920 [2024-11-20 13:37:30.246836] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:17:38.920 [2024-11-20 13:37:30.247020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68984 ] 00:17:38.920 [2024-11-20 13:37:30.431318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.920 [2024-11-20 13:37:30.560219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:39.486 13:37:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:39.486 13:37:31 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:39.486 13:37:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:46.046 13:37:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.046 13:37:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:46.046 13:37:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.046 [2024-11-20 13:37:37.459947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:46.046 [2024-11-20 13:37:37.462781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.046 [2024-11-20 13:37:37.462836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.046 [2024-11-20 13:37:37.462861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.046 [2024-11-20 13:37:37.462912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.046 [2024-11-20 13:37:37.462929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.046 [2024-11-20 13:37:37.462946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.046 [2024-11-20 13:37:37.462961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.046 [2024-11-20 13:37:37.462977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.046 [2024-11-20 13:37:37.462991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.046 [2024-11-20 13:37:37.463013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.046 [2024-11-20 13:37:37.463027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.046 [2024-11-20 13:37:37.463044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.046 13:37:37 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:46.046 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:46.046 [2024-11-20 13:37:37.859958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:46.046 [2024-11-20 13:37:37.862855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.047 [2024-11-20 13:37:37.862916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.047 [2024-11-20 13:37:37.862942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.047 [2024-11-20 13:37:37.862970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.047 [2024-11-20 13:37:37.862987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.047 [2024-11-20 13:37:37.863002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.047 [2024-11-20 13:37:37.863020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.047 [2024-11-20 13:37:37.863034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.047 [2024-11-20 13:37:37.863050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.047 [2024-11-20 13:37:37.863065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.047 [2024-11-20 13:37:37.863081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.047 [2024-11-20 13:37:37.863094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.047 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:46.047 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:46.047 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:46.047 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:46.047 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:46.047 13:37:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:46.047 13:37:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.047 13:37:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:46.047 13:37:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.047 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:46.047 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:46.305 
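The bdev_bdfs helper behind the `(( 2 > 0 ))` check is fully reconstructible from the xtrace lines above: `rpc_cmd bdev_get_bdevs`, the jq filter reading /dev/fd/63 (i.e. a process substitution), and `sort -u`. Reassembled:

    bdev_bdfs() {
        # BDFs of every NVMe controller still registered at the bdev layer
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

`(( 2 > 0 ))` then simply means both controllers are still visible to the target, so the surprise removal has not completed yet.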
13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:46.305 13:37:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:58.567 13:37:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.567 13:37:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:58.567 13:37:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:58.567 13:37:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.567 13:37:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:58.567 13:37:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.567 [2024-11-20 13:37:50.460200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:58.567 [2024-11-20 13:37:50.463099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:58.567 [2024-11-20 13:37:50.463155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.567 [2024-11-20 13:37:50.463178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.567 [2024-11-20 13:37:50.463210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:58.567 [2024-11-20 13:37:50.463226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.567 [2024-11-20 13:37:50.463243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.567 [2024-11-20 13:37:50.463258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:58.567 [2024-11-20 13:37:50.463274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.567 [2024-11-20 13:37:50.463288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.567 [2024-11-20 13:37:50.463305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:58.567 [2024-11-20 13:37:50.463318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.567 [2024-11-20 13:37:50.463334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:58.567 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:58.825 [2024-11-20 13:37:50.860204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
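The sw_hotplug.sh@50/@51 lines above trace the wait loop around that helper; reassembled below (the loop syntax itself is an assumption, only its body is traced):

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

Once `(( 0 > 0 ))` shows up in the trace, both bdevs are gone and the script moves on to re-binding the devices.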
00:17:59.083 [2024-11-20 13:37:50.862980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.083 [2024-11-20 13:37:50.863028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.083 [2024-11-20 13:37:50.863056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.083 [2024-11-20 13:37:50.863083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.083 [2024-11-20 13:37:50.863102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.083 [2024-11-20 13:37:50.863117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.083 [2024-11-20 13:37:50.863134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.083 [2024-11-20 13:37:50.863148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.083 [2024-11-20 13:37:50.863164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.083 [2024-11-20 13:37:50.863179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.083 [2024-11-20 13:37:50.863194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.083 [2024-11-20 13:37:50.863208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.083 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:59.083 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:59.083 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:59.083 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:59.083 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:59.083 13:37:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:59.083 13:37:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.083 13:37:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:59.083 13:37:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.083 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:59.083 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:59.083 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:59.083 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:59.083 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:59.342 13:37:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:11.543 13:38:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.543 13:38:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:11.543 13:38:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:11.543 13:38:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.543 13:38:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:11.543 [2024-11-20 13:38:03.460494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:18:11.543 [2024-11-20 13:38:03.463396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:11.543 [2024-11-20 13:38:03.463457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.543 [2024-11-20 13:38:03.463481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.543 [2024-11-20 13:38:03.463512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:11.543 [2024-11-20 13:38:03.463528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.543 [2024-11-20 13:38:03.463548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.543 [2024-11-20 13:38:03.463563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:11.543 [2024-11-20 13:38:03.463580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.543 [2024-11-20 13:38:03.463594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.543 [2024-11-20 13:38:03.463610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:11.543 [2024-11-20 13:38:03.463624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.543 [2024-11-20 13:38:03.463640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.543 13:38:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:11.543 13:38:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:12.111 [2024-11-20 13:38:03.960522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:12.111 [2024-11-20 13:38:03.963486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.111 [2024-11-20 13:38:03.963538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.111 [2024-11-20 13:38:03.963565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.111 [2024-11-20 13:38:03.963593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.111 [2024-11-20 13:38:03.963611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.111 [2024-11-20 13:38:03.963626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.111 [2024-11-20 13:38:03.963643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.111 [2024-11-20 13:38:03.963657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.111 [2024-11-20 13:38:03.963676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.111 [2024-11-20 13:38:03.963691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.111 [2024-11-20 13:38:03.963706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.111 [2024-11-20 13:38:03.963721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:12.111 13:38:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.111 13:38:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:12.111 13:38:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:12.111 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:12.370 13:38:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:24.578 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:24.578 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:24.578 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:24.578 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:24.578 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:24.578 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:24.578 13:38:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.578 13:38:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:24.578 13:38:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.578 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.05 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.05 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:18:24.579 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:18:24.579 13:38:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:24.579 13:38:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:24.579 13:38:16 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:31.139 13:38:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.139 13:38:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:31.139 [2024-11-20 13:38:22.544543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:31.139 [2024-11-20 13:38:22.547128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.547184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.547207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 [2024-11-20 13:38:22.547257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.547282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.547299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 [2024-11-20 13:38:22.547316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.547332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.547346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 [2024-11-20 13:38:22.547363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.547376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.547404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 13:38:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:31.139 13:38:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:31.139 [2024-11-20 13:38:22.944515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
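The `rpc_cmd bdev_nvme_set_hotplug -d` / `-e` pair traced at @119/@120 above toggles SPDK's bdev-level hotplug monitor between phases of the test. rpc_cmd is a thin wrapper; assuming the stock SPDK defaults it is equivalent to:

    # against the default RPC socket /var/tmp/spdk.sock
    scripts/rpc.py bdev_nvme_set_hotplug -d    # stop polling for PCI add/remove events
    scripts/rpc.py bdev_nvme_set_hotplug -e    # re-enable before the next hotplug event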
00:18:31.139 [2024-11-20 13:38:22.948793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.948881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.948922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 [2024-11-20 13:38:22.948959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.948986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.949008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 [2024-11-20 13:38:22.949034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.949056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.949080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 [2024-11-20 13:38:22.949104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.139 [2024-11-20 13:38:22.949129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.139 [2024-11-20 13:38:22.949151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:31.139 13:38:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.139 13:38:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:31.139 13:38:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:31.139 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:31.398 13:38:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:43.604 13:38:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.604 13:38:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:43.604 13:38:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:43.604 13:38:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.604 13:38:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:43.604 [2024-11-20 13:38:35.545140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
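The `[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:... ]]` test traced at @71 is just a literal string comparison; the backslashes are bash's xtrace escaping every character of the right-hand pattern so it cannot glob. In plain form, the post-reattach assertion is:

    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # both controllers registered again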
00:18:43.604 [2024-11-20 13:38:35.548273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:43.604 [2024-11-20 13:38:35.548339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.604 [2024-11-20 13:38:35.548365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.604 [2024-11-20 13:38:35.548401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:43.604 [2024-11-20 13:38:35.548417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.604 [2024-11-20 13:38:35.548437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.604 [2024-11-20 13:38:35.548452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:43.604 [2024-11-20 13:38:35.548468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.604 [2024-11-20 13:38:35.548482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.604 [2024-11-20 13:38:35.548498] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:43.604 [2024-11-20 13:38:35.548511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.604 [2024-11-20 13:38:35.548527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.604 13:38:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:43.604 13:38:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:44.171 [2024-11-20 13:38:36.045155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:44.171 [2024-11-20 13:38:36.048079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.171 [2024-11-20 13:38:36.048132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.171 [2024-11-20 13:38:36.048160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.171 [2024-11-20 13:38:36.048188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.171 [2024-11-20 13:38:36.048209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.171 [2024-11-20 13:38:36.048224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.171 [2024-11-20 13:38:36.048241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.171 [2024-11-20 13:38:36.048255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.171 [2024-11-20 13:38:36.048274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.172 [2024-11-20 13:38:36.048289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.172 [2024-11-20 13:38:36.048305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.172 [2024-11-20 13:38:36.048319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:44.172 13:38:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.172 13:38:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:44.172 13:38:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:44.172 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:44.429 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:44.687 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:44.687 13:38:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:56.898 13:38:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.898 13:38:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:56.898 13:38:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:56.898 [2024-11-20 13:38:48.545338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:56.898 [2024-11-20 13:38:48.547641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.898 [2024-11-20 13:38:48.547699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.898 [2024-11-20 13:38:48.547721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.898 [2024-11-20 13:38:48.547751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.898 [2024-11-20 13:38:48.547767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.898 [2024-11-20 13:38:48.547783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.898 [2024-11-20 13:38:48.547799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.898 [2024-11-20 13:38:48.547818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.898 [2024-11-20 13:38:48.547832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.898 [2024-11-20 13:38:48.547848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.898 [2024-11-20 13:38:48.547862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.898 [2024-11-20 13:38:48.547912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:56.898 13:38:48 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:56.898 13:38:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.898 13:38:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:56.898 13:38:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:56.898 13:38:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:57.157 [2024-11-20 13:38:48.945355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:18:57.157 [2024-11-20 13:38:48.948261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:57.157 [2024-11-20 13:38:48.948310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.157 [2024-11-20 13:38:48.948336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.157 [2024-11-20 13:38:48.948363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:57.157 [2024-11-20 13:38:48.948382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.157 [2024-11-20 13:38:48.948397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.157 [2024-11-20 13:38:48.948414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:57.157 [2024-11-20 13:38:48.948428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.157 [2024-11-20 13:38:48.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.157 [2024-11-20 13:38:48.948463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:57.157 [2024-11-20 13:38:48.948482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.157 [2024-11-20 13:38:48.948496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:18:57.157 13:38:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.157 13:38:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:57.157 13:38:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:57.157 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:57.415 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:57.674 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:57.674 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:57.674 13:38:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.12 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.12 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.12 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.12 2 00:19:09.904 remove_attach_helper took 45.12s to complete (handling 2 nvme drive(s)) 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:19:09.904 13:39:01 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68984 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68984 ']' 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68984 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68984 00:19:09.904 killing process with pid 68984 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@960 -- # 
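The bare echo traces at sw_hotplug.sh@56 and @58-@62 above (and the per-device echo 1 at @40 earlier) show only the values being written; xtrace hides the redirection targets. A conventional sysfs remove/rescan/rebind sequence that would produce exactly these echoes is sketched below; the file paths are assumptions inferred from the values, not taken from the log:

nvmes=(0000:00:10.0 0000:00:11.0)

surprise_remove() {
    local dev
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed target of @40
    done
}

rescan_and_rebind() {
    local driver=uio_pci_generic dev
    echo 1 > /sys/bus/pci/rescan                      # assumed target of @56
    for dev in "${nvmes[@]}"; do
        echo "$driver" > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe                      # @60
        echo "$dev" > "/sys/bus/pci/drivers/$driver/bind" || true     # @61, may already be bound
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"         # @62, clear the override
    done
}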
process_name=reactor_0 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68984' 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68984 00:19:09.904 13:39:01 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68984 00:19:12.434 13:39:03 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:12.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:12.692 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:12.692 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:12.950 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:12.950 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:12.950 ************************************ 00:19:12.950 END TEST sw_hotplug 00:19:12.950 ************************************ 00:19:12.950 00:19:12.950 real 2m31.529s 00:19:12.950 user 1m51.060s 00:19:12.950 sys 0m20.189s 00:19:12.950 13:39:04 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.950 13:39:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:12.950 13:39:04 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:19:12.950 13:39:04 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:12.950 13:39:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.950 13:39:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.950 13:39:04 -- common/autotest_common.sh@10 -- # set +x 00:19:12.950 ************************************ 00:19:12.950 START TEST nvme_xnvme 00:19:12.950 ************************************ 00:19:12.950 13:39:04 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:12.950 * Looking for test storage... 
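Before the xnvme suite above starts probing for storage, the sw_hotplug app (pid 68984, running as reactor_0) is torn down by the killprocess helper traced at common/autotest_common.sh@954-@978 just ahead of the END TEST banner. A condensed reconstruction of the path this run took (the sudo branch, where the helper kills sudo's child instead, is elided):

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                             # @954
    if kill -0 "$pid"; then                               # @958: still alive?
        if [[ $(uname) == Linux ]]; then                  # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960 -> reactor_0
        fi
        if [[ $process_name != sudo ]]; then              # @964
            echo "killing process with pid $pid"          # @972
            kill "$pid"                                   # @973
        fi
        wait "$pid"                                       # @978: reap, propagate exit status
    fi
}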
00:19:12.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:12.950 13:39:04 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:12.950 13:39:04 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:19:12.950 13:39:04 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.212 13:39:05 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.212 --rc genhtml_branch_coverage=1 00:19:13.212 --rc genhtml_function_coverage=1 00:19:13.212 --rc genhtml_legend=1 00:19:13.212 --rc geninfo_all_blocks=1 00:19:13.212 --rc geninfo_unexecuted_blocks=1 00:19:13.212 00:19:13.212 ' 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.212 --rc genhtml_branch_coverage=1 00:19:13.212 --rc genhtml_function_coverage=1 00:19:13.212 --rc genhtml_legend=1 00:19:13.212 --rc geninfo_all_blocks=1 00:19:13.212 --rc geninfo_unexecuted_blocks=1 00:19:13.212 00:19:13.212 ' 00:19:13.212 13:39:05 
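The lcov version probe above (lcov --version | awk '{print $NF}', then lt 1.15 2 stepping through cmp_versions in scripts/common.sh) decides whether the older lcov-1.x --rc flag spellings are needed. A condensed, self-contained version of the comparison the trace walks through - the real helper also validates each component through its decimal function, omitted here:

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v c1 c2
    IFS=.-: read -ra ver1 <<< "$1"    # split on dots, dashes, colons
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # pad the shorter version with 0s
        if ((c1 > c2)); then [[ $op == ">" || $op == ">=" ]]; return; fi
        if ((c1 < c2)); then [[ $op == "<" || $op == "<=" ]]; return; fi
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all components equal
}
lt() { cmp_versions "$1" "<" "$2"; }

# The gate in the trace then reduces to:
#   lt "$(lcov --version | awk '{print $NF}')" 2    # 1.15 < 2 -> old-style flags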
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.212 --rc genhtml_branch_coverage=1 00:19:13.212 --rc genhtml_function_coverage=1 00:19:13.212 --rc genhtml_legend=1 00:19:13.212 --rc geninfo_all_blocks=1 00:19:13.212 --rc geninfo_unexecuted_blocks=1 00:19:13.212 00:19:13.212 ' 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.212 --rc genhtml_branch_coverage=1 00:19:13.212 --rc genhtml_function_coverage=1 00:19:13.212 --rc genhtml_legend=1 00:19:13.212 --rc geninfo_all_blocks=1 00:19:13.212 --rc geninfo_unexecuted_blocks=1 00:19:13.212 00:19:13.212 ' 00:19:13.212 13:39:05 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:19:13.212 13:39:05 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:19:13.212 13:39:05 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:19:13.213 13:39:05 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:19:13.213 13:39:05 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:19:13.213 13:39:05 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:19:13.213 13:39:05 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:19:13.213 13:39:05 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:19:13.213 13:39:05 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:19:13.213 13:39:05 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:13.213 #define SPDK_CONFIG_H 00:19:13.213 #define SPDK_CONFIG_AIO_FSDEV 1 00:19:13.213 #define SPDK_CONFIG_APPS 1 00:19:13.213 #define SPDK_CONFIG_ARCH native 00:19:13.213 #define SPDK_CONFIG_ASAN 1 00:19:13.213 #undef SPDK_CONFIG_AVAHI 00:19:13.213 #undef SPDK_CONFIG_CET 00:19:13.213 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:19:13.213 #define SPDK_CONFIG_COVERAGE 1 00:19:13.213 #define SPDK_CONFIG_CROSS_PREFIX 00:19:13.213 #undef SPDK_CONFIG_CRYPTO 00:19:13.213 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:13.213 #undef SPDK_CONFIG_CUSTOMOCF 00:19:13.213 #undef SPDK_CONFIG_DAOS 00:19:13.213 #define SPDK_CONFIG_DAOS_DIR 00:19:13.213 #define SPDK_CONFIG_DEBUG 1 00:19:13.213 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:13.213 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:13.213 #define SPDK_CONFIG_DPDK_INC_DIR 00:19:13.213 #define SPDK_CONFIG_DPDK_LIB_DIR 00:19:13.213 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:13.213 #undef SPDK_CONFIG_DPDK_UADK 00:19:13.213 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:13.213 #define SPDK_CONFIG_EXAMPLES 1 00:19:13.213 #undef SPDK_CONFIG_FC 00:19:13.213 #define SPDK_CONFIG_FC_PATH 00:19:13.213 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:13.213 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:13.213 #define SPDK_CONFIG_FSDEV 1 00:19:13.213 #undef SPDK_CONFIG_FUSE 00:19:13.213 #undef SPDK_CONFIG_FUZZER 00:19:13.213 #define SPDK_CONFIG_FUZZER_LIB 00:19:13.213 #undef SPDK_CONFIG_GOLANG 00:19:13.213 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:13.213 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:13.213 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:13.213 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:13.214 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:13.214 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:13.214 #undef SPDK_CONFIG_HAVE_LZ4 00:19:13.214 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:19:13.214 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:19:13.214 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:13.214 #define SPDK_CONFIG_IDXD 1 00:19:13.214 #define SPDK_CONFIG_IDXD_KERNEL 1 00:19:13.214 #undef SPDK_CONFIG_IPSEC_MB 00:19:13.214 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:13.214 #define SPDK_CONFIG_ISAL 1 00:19:13.214 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:13.214 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:13.214 #define SPDK_CONFIG_LIBDIR 00:19:13.214 #undef SPDK_CONFIG_LTO 00:19:13.214 #define SPDK_CONFIG_MAX_LCORES 128 00:19:13.214 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:19:13.214 #define SPDK_CONFIG_NVME_CUSE 1 00:19:13.214 #undef SPDK_CONFIG_OCF 00:19:13.214 #define SPDK_CONFIG_OCF_PATH 00:19:13.214 #define SPDK_CONFIG_OPENSSL_PATH 00:19:13.214 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:13.214 
#define SPDK_CONFIG_PGO_DIR 00:19:13.214 #undef SPDK_CONFIG_PGO_USE 00:19:13.214 #define SPDK_CONFIG_PREFIX /usr/local 00:19:13.214 #undef SPDK_CONFIG_RAID5F 00:19:13.214 #undef SPDK_CONFIG_RBD 00:19:13.214 #define SPDK_CONFIG_RDMA 1 00:19:13.214 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:13.214 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:13.214 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:13.214 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:13.214 #define SPDK_CONFIG_SHARED 1 00:19:13.214 #undef SPDK_CONFIG_SMA 00:19:13.214 #define SPDK_CONFIG_TESTS 1 00:19:13.214 #undef SPDK_CONFIG_TSAN 00:19:13.214 #define SPDK_CONFIG_UBLK 1 00:19:13.214 #define SPDK_CONFIG_UBSAN 1 00:19:13.214 #undef SPDK_CONFIG_UNIT_TESTS 00:19:13.214 #undef SPDK_CONFIG_URING 00:19:13.214 #define SPDK_CONFIG_URING_PATH 00:19:13.214 #undef SPDK_CONFIG_URING_ZNS 00:19:13.214 #undef SPDK_CONFIG_USDT 00:19:13.214 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:13.214 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:13.214 #undef SPDK_CONFIG_VFIO_USER 00:19:13.214 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:13.214 #define SPDK_CONFIG_VHOST 1 00:19:13.214 #define SPDK_CONFIG_VIRTIO 1 00:19:13.214 #undef SPDK_CONFIG_VTUNE 00:19:13.214 #define SPDK_CONFIG_VTUNE_DIR 00:19:13.214 #define SPDK_CONFIG_WERROR 1 00:19:13.214 #define SPDK_CONFIG_WPDK_DIR 00:19:13.214 #define SPDK_CONFIG_XNVME 1 00:19:13.214 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:13.214 13:39:05 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.214 13:39:05 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.214 13:39:05 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.214 13:39:05 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.214 13:39:05 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.214 13:39:05 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.214 13:39:05 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.214 13:39:05 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.214 13:39:05 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:19:13.214 13:39:05 nvme_xnvme -- paths/export.sh@6 -- # 
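The wall of #define/#undef lines above is not a file being printed for its own sake: applications.sh@22-@24 slurps include/spdk/config.h into a [[ ... == *pattern* ]] test, and xtrace escapes every character of the quoted pattern, which is where the *\#\d\e\f\i\n\e...* run comes from. The gate itself is tiny (debug_build is a name chosen here for illustration; the real script goes on to adjust the app arrays only when SPDK_AUTOTEST_DEBUG_APPS is also set):

config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
debug_build=0
# Slurp the generated header and substring-match the define.
if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    debug_build=1
fi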
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@68 -- # uname -s 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:19:13.214 13:39:05 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:19:13.214 13:39:05 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:19:13.214 13:39:05 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:13.215 
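The long run of bare ": 0", ": 1", ": rdma", ": true" traces at autotest_common.sh@58-@178 above is the shell parameter-default idiom: ": ${VAR=value}" assigns value only if VAR is unset, and since the : builtin discards its arguments, xtrace shows just the expanded result - 1 where autorun-spdk.conf already exported a flag, the default otherwise. A sketch with a few of the names from the trace (defaults are assumed to be 0 unless overridden):

# Defaults for the test matrix; values already exported by autorun-spdk.conf
# (SPDK_TEST_NVME=1, SPDK_TEST_FTL=1, SPDK_TEST_XNVME=1, ...) survive.
: "${SPDK_RUN_FUNCTIONAL_TEST=0}";    export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVME=0}";              export SPDK_TEST_NVME
: "${SPDK_TEST_NVMF_TRANSPORT=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_XNVME=0}";             export SPDK_TEST_XNVME
: "${SPDK_AUTOTEST_X=true}";          export SPDK_AUTOTEST_X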
13:39:05 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:13.215 
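The sanitizer plumbing traced at @199-@244 is plain environment setup: ASan and UBSan read their options from environment variables, and leak suppressions live in a file named by LSAN_OPTIONS. Everything below is lifted from the values visible in the trace; the cat at @206 appends a suppression list whose contents the log does not show:

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # ignore a known libfuse3 leak
export LSAN_OPTIONS=suppressions="$asan_suppression_file"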
13:39:05 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:19:13.215 13:39:05 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70311 ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70311 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.DYmeZk 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.DYmeZk/tests/xnvme /tmp/spdk.DYmeZk 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976092672 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591826432 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:19:13.216 
13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976092672 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591826432 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:19:13.216 13:39:05 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94571601920 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5131177984 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:19:13.216 * Looking for test storage... 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976092672 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:13.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:19:13.216 13:39:05 nvme_xnvme -- 
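The df -T table and the target_space arithmetic above come from set_test_storage, which walks the candidate directories (the test dir, then the mktemp-generated fallback /tmp/spdk.DYmeZk) and exports the first one whose filesystem has enough free space - here /home on btrfs, with roughly 13.9 GB available against a 2 GB request. A condensed sketch of that selection; the real helper also records sizes/uses per mount and grows the request on tmpfs/ramfs or / to leave room for hugepages, both omitted here:

set_test_storage() {
    # storage_candidates is assumed set as in the trace:
    # ("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    local requested_size=$1 target_dir mount target_space
    local -A avails fss
    local source fs size use avail _
    # df -T reports 1K blocks; index free bytes by mount point.
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]:-0}
        ((target_space == 0 || target_space < requested_size)) && continue
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
        return 0
    done
    printf '* Test storage is not available\n'
    return 1
}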
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:13.216 13:39:05 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:19:13.217 13:39:05 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:13.217 13:39:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:19:13.217 13:39:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:13.474 13:39:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.474 13:39:05 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:19:13.474 13:39:05 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.474 13:39:05 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:13.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.474 --rc genhtml_branch_coverage=1 00:19:13.474 --rc genhtml_function_coverage=1 00:19:13.474 --rc genhtml_legend=1 00:19:13.474 --rc geninfo_all_blocks=1 00:19:13.474 --rc geninfo_unexecuted_blocks=1 00:19:13.474 00:19:13.474 ' 00:19:13.474 13:39:05 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:13.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.474 --rc genhtml_branch_coverage=1 00:19:13.474 --rc genhtml_function_coverage=1 00:19:13.474 --rc genhtml_legend=1 00:19:13.474 --rc geninfo_all_blocks=1 00:19:13.474 --rc geninfo_unexecuted_blocks=1 00:19:13.475 00:19:13.475 ' 00:19:13.475 13:39:05 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:13.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.475 --rc genhtml_branch_coverage=1 00:19:13.475 --rc genhtml_function_coverage=1 00:19:13.475 --rc genhtml_legend=1 00:19:13.475 --rc geninfo_all_blocks=1 00:19:13.475 --rc geninfo_unexecuted_blocks=1 00:19:13.475 00:19:13.475 ' 00:19:13.475 13:39:05 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:13.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.475 --rc genhtml_branch_coverage=1 00:19:13.475 --rc genhtml_function_coverage=1 00:19:13.475 --rc genhtml_legend=1 00:19:13.475 --rc geninfo_all_blocks=1 00:19:13.475 --rc geninfo_unexecuted_blocks=1 00:19:13.475 00:19:13.475 ' 00:19:13.475 13:39:05 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.475 13:39:05 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.475 13:39:05 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.475 13:39:05 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.475 13:39:05 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.475 13:39:05 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.475 13:39:05 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.475 13:39:05 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.475 13:39:05 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:19:13.475 13:39:05 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:19:13.475 
13:39:05 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:19:13.475 13:39:05 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:13.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.732 Waiting for block devices as requested 00:19:13.989 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.989 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.989 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:14.247 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:19.523 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:19.523 13:39:11 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:19:19.782 13:39:11 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:19:19.782 13:39:11 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:19:19.782 13:39:11 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:19:19.782 13:39:11 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:19:19.782 13:39:11 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:19:19.782 13:39:11 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:19:19.782 13:39:11 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:19:20.040 No valid GPT data, bailing 00:19:20.040 13:39:11 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:20.040 13:39:11 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:19:20.040 13:39:11 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:20.040 13:39:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:20.040 13:39:11 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:20.040 13:39:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.040 13:39:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.040 ************************************ 00:19:20.040 START TEST xnvme_rpc 00:19:20.040 ************************************ 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70704 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70704 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70704 ']' 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.040 13:39:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.040 [2024-11-20 13:39:12.073925] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
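The "waitforlisten 70704" step above blocks the test until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock). A simplified stand-in for that helper, polling with the stock scripts/rpc.py client, is sketched below; the real implementation in autotest_common.sh adds a retry budget and richer failure diagnostics, so treat this as an illustration rather than the actual code:

# Poll until the target process is serving RPC, or fail if it dies first.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1       # target died during startup
        if [[ -S "$rpc_addr" ]] &&
           scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0                                 # socket is up and answering
        fi
        sleep 0.1
    done
    return 1                                         # gave up waiting
}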
00:19:20.040 [2024-11-20 13:39:12.074166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70704 ] 00:19:20.298 [2024-11-20 13:39:12.272037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.555 [2024-11-20 13:39:12.454433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.488 xnvme_bdev 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.488 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.489 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70704 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70704 ']' 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70704 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70704 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.746 killing process with pid 70704 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70704' 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70704 00:19:21.746 13:39:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70704 00:19:24.273 00:19:24.273 real 0m3.896s 00:19:24.273 user 0m4.175s 00:19:24.273 sys 0m0.503s 00:19:24.273 13:39:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.273 13:39:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.273 ************************************ 00:19:24.273 END TEST xnvme_rpc 00:19:24.273 ************************************ 00:19:24.273 13:39:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:24.273 13:39:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.273 13:39:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.273 13:39:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.273 ************************************ 00:19:24.273 START TEST xnvme_bdevperf 00:19:24.273 ************************************ 00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
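Each property check in the xnvme_rpc test above follows one pattern: dump the bdev subsystem configuration over RPC, then extract a single field of the saved bdev_xnvme_create call with jq. Condensed into a standalone helper that mirrors the rpc_xnvme function being traced (plain scripts/rpc.py standing in for the harness's rpc_cmd wrapper, error handling omitted):

# Read back one parameter of the bdev_xnvme_create entry from the live config.
rpc_xnvme() {
    local prop=$1
    scripts/rpc.py framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.${prop}"
}

[[ $(rpc_xnvme name) == xnvme_bdev ]]         # assertions mirrored from the trace
[[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
[[ $(rpc_xnvme io_mechanism) == libaio ]]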
00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:24.273 13:39:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:24.273 { 00:19:24.273 "subsystems": [ 00:19:24.273 { 00:19:24.273 "subsystem": "bdev", 00:19:24.273 "config": [ 00:19:24.273 { 00:19:24.273 "params": { 00:19:24.273 "io_mechanism": "libaio", 00:19:24.273 "conserve_cpu": false, 00:19:24.273 "filename": "/dev/nvme0n1", 00:19:24.273 "name": "xnvme_bdev" 00:19:24.273 }, 00:19:24.273 "method": "bdev_xnvme_create" 00:19:24.273 }, 00:19:24.273 { 00:19:24.273 "method": "bdev_wait_for_examine" 00:19:24.273 } 00:19:24.273 ] 00:19:24.273 } 00:19:24.273 ] 00:19:24.273 } 00:19:24.273 [2024-11-20 13:39:15.943984] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:19:24.273 [2024-11-20 13:39:15.944206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70789 ] 00:19:24.273 [2024-11-20 13:39:16.136711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.273 [2024-11-20 13:39:16.239790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.532 Running I/O for 5 seconds... 00:19:26.838 21775.00 IOPS, 85.06 MiB/s [2024-11-20T13:39:19.812Z] 21010.50 IOPS, 82.07 MiB/s [2024-11-20T13:39:20.746Z] 21637.67 IOPS, 84.52 MiB/s [2024-11-20T13:39:21.679Z] 22159.75 IOPS, 86.56 MiB/s 00:19:29.640 Latency(us) 00:19:29.640 [2024-11-20T13:39:21.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.640 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:29.640 xnvme_bdev : 5.01 21333.43 83.33 0.00 0.00 2991.43 431.94 10783.65 00:19:29.640 [2024-11-20T13:39:21.679Z] =================================================================================================================== 00:19:29.640 [2024-11-20T13:39:21.679Z] Total : 21333.43 83.33 0.00 0.00 2991.43 431.94 10783.65 00:19:31.015 13:39:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:31.015 13:39:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:31.015 13:39:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:31.015 13:39:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:31.015 13:39:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:31.015 { 00:19:31.015 "subsystems": [ 00:19:31.015 { 00:19:31.015 "subsystem": "bdev", 00:19:31.015 "config": [ 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "io_mechanism": "libaio", 00:19:31.015 "conserve_cpu": false, 00:19:31.015 "filename": "/dev/nvme0n1", 00:19:31.015 "name": "xnvme_bdev" 00:19:31.015 }, 00:19:31.015 "method": "bdev_xnvme_create" 00:19:31.015 }, 00:19:31.015 { 00:19:31.015 "method": "bdev_wait_for_examine" 00:19:31.015 } 00:19:31.015 ] 00:19:31.015 } 00:19:31.015 ] 00:19:31.015 } 00:19:31.015 [2024-11-20 13:39:22.741366] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
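The bdevperf runs take their bdev configuration as a JSON document on an inherited file descriptor (--json /dev/fd/62), which the harness's gen_conf fills with the "subsystems" object echoed above. An equivalent standalone invocation feeds the same JSON through process substitution, with flags and parameter values copied from this run:

./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
    --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)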
00:19:31.015 [2024-11-20 13:39:22.741562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70870 ] 00:19:31.015 [2024-11-20 13:39:22.925493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.015 [2024-11-20 13:39:23.034670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.583 Running I/O for 5 seconds... 00:19:33.464 19740.00 IOPS, 77.11 MiB/s [2024-11-20T13:39:26.436Z] 21546.00 IOPS, 84.16 MiB/s [2024-11-20T13:39:27.811Z] 21299.67 IOPS, 83.20 MiB/s [2024-11-20T13:39:28.782Z] 21318.75 IOPS, 83.28 MiB/s 00:19:36.743 Latency(us) 00:19:36.743 [2024-11-20T13:39:28.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.743 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:36.743 xnvme_bdev : 5.00 22283.89 87.05 0.00 0.00 2864.59 218.76 6404.65 00:19:36.743 [2024-11-20T13:39:28.782Z] =================================================================================================================== 00:19:36.743 [2024-11-20T13:39:28.782Z] Total : 22283.89 87.05 0.00 0.00 2864.59 218.76 6404.65 00:19:37.350 00:19:37.350 real 0m13.534s 00:19:37.350 user 0m5.502s 00:19:37.350 sys 0m5.827s 00:19:37.350 13:39:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.350 13:39:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:37.350 ************************************ 00:19:37.350 END TEST xnvme_bdevperf 00:19:37.350 ************************************ 00:19:37.609 13:39:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:37.609 13:39:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:37.609 13:39:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.609 13:39:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:37.609 ************************************ 00:19:37.609 START TEST xnvme_fio_plugin 00:19:37.609 ************************************ 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:37.609 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:37.610 13:39:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:37.610 { 00:19:37.610 "subsystems": [ 00:19:37.610 { 00:19:37.610 "subsystem": "bdev", 00:19:37.610 "config": [ 00:19:37.610 { 00:19:37.610 "params": { 00:19:37.610 "io_mechanism": "libaio", 00:19:37.610 "conserve_cpu": false, 00:19:37.610 "filename": "/dev/nvme0n1", 00:19:37.610 "name": "xnvme_bdev" 00:19:37.610 }, 00:19:37.610 "method": "bdev_xnvme_create" 00:19:37.610 }, 00:19:37.610 { 00:19:37.610 "method": "bdev_wait_for_examine" 00:19:37.610 } 00:19:37.610 ] 00:19:37.610 } 00:19:37.610 ] 00:19:37.610 } 00:19:37.869 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:37.869 fio-3.35 00:19:37.869 Starting 1 thread 00:19:44.428 00:19:44.428 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70987: Wed Nov 20 13:39:35 2024 00:19:44.428 read: IOPS=22.4k, BW=87.5MiB/s (91.8MB/s)(438MiB/5001msec) 00:19:44.428 slat (usec): min=5, max=2332, avg=39.49, stdev=34.17 00:19:44.428 clat (usec): min=120, max=7187, avg=1575.54, stdev=928.91 00:19:44.428 lat (usec): min=155, max=7221, avg=1615.03, stdev=933.41 00:19:44.428 clat percentiles (usec): 00:19:44.428 | 1.00th=[ 251], 5.00th=[ 388], 10.00th=[ 510], 20.00th=[ 734], 00:19:44.428 | 30.00th=[ 947], 40.00th=[ 1156], 50.00th=[ 1385], 60.00th=[ 1647], 00:19:44.428 | 70.00th=[ 1991], 80.00th=[ 2376], 90.00th=[ 2933], 95.00th=[ 3326], 00:19:44.428 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 4948], 99.95th=[ 6587], 00:19:44.428 | 99.99th=[ 7111] 00:19:44.428 bw ( KiB/s): min=74752, max=106512, per=97.95%, avg=87761.89, 
stdev=10185.44, samples=9 00:19:44.428 iops : min=18688, max=26628, avg=21940.44, stdev=2546.37, samples=9 00:19:44.428 lat (usec) : 250=0.97%, 500=8.59%, 750=11.19%, 1000=11.74% 00:19:44.428 lat (msec) : 2=37.86%, 4=28.61%, 10=1.04% 00:19:44.428 cpu : usr=25.60%, sys=52.76%, ctx=68, majf=0, minf=764 00:19:44.428 IO depths : 1=0.2%, 2=1.8%, 4=5.1%, 8=11.8%, 16=25.7%, 32=53.8%, >=64=1.7% 00:19:44.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.428 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:44.428 issued rwts: total=112025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.428 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.428 00:19:44.428 Run status group 0 (all jobs): 00:19:44.428 READ: bw=87.5MiB/s (91.8MB/s), 87.5MiB/s-87.5MiB/s (91.8MB/s-91.8MB/s), io=438MiB (459MB), run=5001-5001msec 00:19:44.687 ----------------------------------------------------- 00:19:44.687 Suppressions used: 00:19:44.687 count bytes template 00:19:44.687 1 11 /usr/src/fio/parse.c 00:19:44.687 1 8 libtcmalloc_minimal.so 00:19:44.687 1 904 libcrypto.so 00:19:44.687 ----------------------------------------------------- 00:19:44.687 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:44.687 13:39:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:44.687 { 00:19:44.687 "subsystems": [ 00:19:44.687 { 00:19:44.687 "subsystem": "bdev", 00:19:44.687 "config": [ 00:19:44.687 { 00:19:44.687 "params": { 00:19:44.687 "io_mechanism": "libaio", 00:19:44.687 "conserve_cpu": false, 00:19:44.687 "filename": "/dev/nvme0n1", 00:19:44.687 "name": "xnvme_bdev" 00:19:44.687 }, 00:19:44.687 "method": "bdev_xnvme_create" 00:19:44.687 }, 00:19:44.687 { 00:19:44.687 "method": "bdev_wait_for_examine" 00:19:44.687 } 00:19:44.687 ] 00:19:44.687 } 00:19:44.687 ] 00:19:44.687 } 00:19:44.946 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:44.946 fio-3.35 00:19:44.946 Starting 1 thread 00:19:51.521 00:19:51.521 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71083: Wed Nov 20 13:39:42 2024 00:19:51.521 write: IOPS=27.4k, BW=107MiB/s (112MB/s)(536MiB/5001msec); 0 zone resets 00:19:51.521 slat (usec): min=5, max=1605, avg=31.52, stdev=35.12 00:19:51.521 clat (usec): min=121, max=7216, avg=1351.56, stdev=842.92 00:19:51.521 lat (usec): min=182, max=7284, avg=1383.08, stdev=848.90 00:19:51.521 clat percentiles (usec): 00:19:51.521 | 1.00th=[ 255], 5.00th=[ 400], 10.00th=[ 519], 20.00th=[ 676], 00:19:51.521 | 30.00th=[ 816], 40.00th=[ 955], 50.00th=[ 1090], 60.00th=[ 1287], 00:19:51.521 | 70.00th=[ 1565], 80.00th=[ 1991], 90.00th=[ 2638], 95.00th=[ 3097], 00:19:51.521 | 99.00th=[ 3851], 99.50th=[ 4146], 99.90th=[ 5211], 99.95th=[ 6194], 00:19:51.521 | 99.99th=[ 6980] 00:19:51.521 bw ( KiB/s): min=81192, max=139904, per=99.49%, avg=109147.56, stdev=17500.52, samples=9 00:19:51.521 iops : min=20298, max=34976, avg=27286.89, stdev=4375.13, samples=9 00:19:51.521 lat (usec) : 250=0.92%, 500=8.29%, 750=15.75%, 1000=18.36% 00:19:51.521 lat (msec) : 2=36.94%, 4=19.04%, 10=0.71% 00:19:51.521 cpu : usr=27.76%, sys=52.56%, ctx=94, majf=0, minf=764 00:19:51.521 IO depths : 1=0.1%, 2=1.3%, 4=4.1%, 8=10.4%, 16=25.1%, 32=57.1%, >=64=1.9% 00:19:51.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.521 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:51.521 issued rwts: total=0,137156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.521 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:51.521 00:19:51.521 Run status group 0 (all jobs): 00:19:51.521 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=536MiB (562MB), run=5001-5001msec 00:19:52.088 ----------------------------------------------------- 00:19:52.088 Suppressions used: 00:19:52.088 count bytes template 00:19:52.088 1 11 /usr/src/fio/parse.c 00:19:52.088 1 8 libtcmalloc_minimal.so 00:19:52.088 1 904 libcrypto.so 00:19:52.088 ----------------------------------------------------- 00:19:52.088 00:19:52.088 00:19:52.088 real 0m14.614s 00:19:52.088 user 0m6.311s 00:19:52.088 sys 0m5.890s 00:19:52.088 13:39:44 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.088 13:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:52.088 ************************************ 00:19:52.088 END TEST xnvme_fio_plugin 00:19:52.088 ************************************ 00:19:52.088 13:39:44 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:52.088 13:39:44 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:52.088 13:39:44 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:52.088 13:39:44 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:52.088 13:39:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:52.088 13:39:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.088 13:39:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:52.088 ************************************ 00:19:52.088 START TEST xnvme_rpc 00:19:52.088 ************************************ 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71173 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71173 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71173 ']' 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.088 13:39:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.349 [2024-11-20 13:39:44.234717] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
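This second xnvme_rpc pass repeats the first with conserve_cpu=true: the cc map traced above (cc["false"]= empty, cc["true"]=-c) turns the boolean into one extra argument on the create call. With plain scripts/rpc.py standing in for rpc_cmd, the create-and-verify pair reduces to:

# conserve_cpu=true only appends -c; filename, name and io_mechanism are unchanged.
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c

# The flag must round-trip through the saved config (rpc_xnvme as sketched earlier).
[[ $(rpc_xnvme conserve_cpu) == true ]]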
00:19:52.349 [2024-11-20 13:39:44.234925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71173 ] 00:19:52.608 [2024-11-20 13:39:44.421280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.608 [2024-11-20 13:39:44.552748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.544 xnvme_bdev 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:53.544 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71173 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71173 ']' 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71173 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71173 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71173' 00:19:53.803 killing process with pid 71173 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71173 00:19:53.803 13:39:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71173 00:19:56.330 00:19:56.330 real 0m3.775s 00:19:56.330 user 0m4.129s 00:19:56.330 sys 0m0.483s 00:19:56.330 13:39:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.330 13:39:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.330 ************************************ 00:19:56.330 END TEST xnvme_rpc 00:19:56.330 ************************************ 00:19:56.330 13:39:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:56.330 13:39:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:56.330 13:39:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.330 13:39:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:56.330 ************************************ 00:19:56.330 START TEST xnvme_bdevperf 00:19:56.330 ************************************ 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:56.330 13:39:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:56.330 { 00:19:56.330 "subsystems": [ 00:19:56.330 { 00:19:56.330 "subsystem": "bdev", 00:19:56.330 "config": [ 00:19:56.330 { 00:19:56.330 "params": { 00:19:56.330 "io_mechanism": "libaio", 00:19:56.330 "conserve_cpu": true, 00:19:56.330 "filename": "/dev/nvme0n1", 00:19:56.330 "name": "xnvme_bdev" 00:19:56.330 }, 00:19:56.330 "method": "bdev_xnvme_create" 00:19:56.330 }, 00:19:56.330 { 00:19:56.330 "method": "bdev_wait_for_examine" 00:19:56.330 } 00:19:56.330 ] 00:19:56.330 } 00:19:56.330 ] 00:19:56.330 } 00:19:56.330 [2024-11-20 13:39:48.018720] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:19:56.330 [2024-11-20 13:39:48.018983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71254 ] 00:19:56.330 [2024-11-20 13:39:48.213360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.330 [2024-11-20 13:39:48.330080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.896 Running I/O for 5 seconds... 00:19:58.790 22928.00 IOPS, 89.56 MiB/s [2024-11-20T13:39:51.764Z] 23382.50 IOPS, 91.34 MiB/s [2024-11-20T13:39:52.700Z] 23691.00 IOPS, 92.54 MiB/s [2024-11-20T13:39:54.075Z] 24273.00 IOPS, 94.82 MiB/s 00:20:02.036 Latency(us) 00:20:02.036 [2024-11-20T13:39:54.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.036 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:02.036 xnvme_bdev : 5.00 24097.24 94.13 0.00 0.00 2649.15 247.62 8877.15 00:20:02.036 [2024-11-20T13:39:54.075Z] =================================================================================================================== 00:20:02.036 [2024-11-20T13:39:54.075Z] Total : 24097.24 94.13 0.00 0.00 2649.15 247.62 8877.15 00:20:03.013 13:39:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:03.013 13:39:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:03.013 13:39:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:03.013 13:39:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:03.013 13:39:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:03.013 { 00:20:03.013 "subsystems": [ 00:20:03.013 { 00:20:03.013 "subsystem": "bdev", 00:20:03.013 "config": [ 00:20:03.013 { 00:20:03.013 "params": { 00:20:03.013 "io_mechanism": "libaio", 00:20:03.013 "conserve_cpu": true, 00:20:03.013 "filename": "/dev/nvme0n1", 00:20:03.013 "name": "xnvme_bdev" 00:20:03.013 }, 00:20:03.013 "method": "bdev_xnvme_create" 00:20:03.013 }, 00:20:03.013 { 00:20:03.013 "method": "bdev_wait_for_examine" 00:20:03.013 } 00:20:03.013 ] 00:20:03.013 } 00:20:03.013 ] 00:20:03.013 } 00:20:03.013 [2024-11-20 13:39:54.873272] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
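The MiB/s column in these bdevperf tables is IOPS times the 4096-byte I/O size (-o 4096), and the reported averages are consistent with the -q 64 queue depth via Little's law; checking the randread total above:

# 24097.24 IOPS at 4 KiB per I/O gives the reported 94.13 MiB/s:
echo '24097.24 * 4096 / 1048576' | bc -l     # ~94.1298

# Little's law sanity check: queue depth / mean latency approximates IOPS:
echo '64 / (2649.15 / 1000000)' | bc -l      # ~24159, close to the measured 24097.24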
00:20:03.013 [2024-11-20 13:39:54.873437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71330 ] 00:20:03.271 [2024-11-20 13:39:55.055589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.271 [2024-11-20 13:39:55.182542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.529 Running I/O for 5 seconds... 00:20:05.839 21355.00 IOPS, 83.42 MiB/s [2024-11-20T13:39:58.812Z] 21991.00 IOPS, 85.90 MiB/s [2024-11-20T13:39:59.748Z] 22120.67 IOPS, 86.41 MiB/s [2024-11-20T13:40:00.740Z] 21724.25 IOPS, 84.86 MiB/s 00:20:08.701 Latency(us) 00:20:08.701 [2024-11-20T13:40:00.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.701 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:08.701 xnvme_bdev : 5.00 21934.49 85.68 0.00 0.00 2910.21 240.17 6464.23 00:20:08.701 [2024-11-20T13:40:00.740Z] =================================================================================================================== 00:20:08.701 [2024-11-20T13:40:00.740Z] Total : 21934.49 85.68 0.00 0.00 2910.21 240.17 6464.23 00:20:09.640 00:20:09.640 real 0m13.691s 00:20:09.640 user 0m5.316s 00:20:09.640 sys 0m5.779s 00:20:09.640 13:40:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.640 13:40:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:09.640 ************************************ 00:20:09.640 END TEST xnvme_bdevperf 00:20:09.640 ************************************ 00:20:09.640 13:40:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:09.640 13:40:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:09.640 13:40:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.640 13:40:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:09.640 ************************************ 00:20:09.640 START TEST xnvme_fio_plugin 00:20:09.640 ************************************ 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:09.640 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:09.641 13:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:09.899 { 00:20:09.899 "subsystems": [ 00:20:09.899 { 00:20:09.899 "subsystem": "bdev", 00:20:09.899 "config": [ 00:20:09.899 { 00:20:09.899 "params": { 00:20:09.899 "io_mechanism": "libaio", 00:20:09.899 "conserve_cpu": true, 00:20:09.899 "filename": "/dev/nvme0n1", 00:20:09.899 "name": "xnvme_bdev" 00:20:09.899 }, 00:20:09.899 "method": "bdev_xnvme_create" 00:20:09.899 }, 00:20:09.899 { 00:20:09.899 "method": "bdev_wait_for_examine" 00:20:09.899 } 00:20:09.899 ] 00:20:09.899 } 00:20:09.899 ] 00:20:09.899 } 00:20:09.899 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:09.899 fio-3.35 00:20:09.899 Starting 1 thread 00:20:16.541 00:20:16.541 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71455: Wed Nov 20 13:40:07 2024 00:20:16.541 read: IOPS=26.0k, BW=102MiB/s (107MB/s)(508MiB/5001msec) 00:20:16.541 slat (usec): min=5, max=785, avg=33.92, stdev=32.43 00:20:16.541 clat (usec): min=120, max=6850, avg=1386.80, stdev=790.98 00:20:16.541 lat (usec): min=182, max=6896, avg=1420.72, stdev=794.86 00:20:16.541 clat percentiles (usec): 00:20:16.541 | 1.00th=[ 251], 5.00th=[ 383], 10.00th=[ 506], 20.00th=[ 701], 00:20:16.541 | 30.00th=[ 873], 40.00th=[ 1029], 50.00th=[ 1205], 60.00th=[ 1418], 00:20:16.541 | 70.00th=[ 1696], 80.00th=[ 2073], 90.00th=[ 2540], 95.00th=[ 2868], 00:20:16.541 | 99.00th=[ 3589], 99.50th=[ 3916], 99.90th=[ 4621], 99.95th=[ 4948], 00:20:16.541 | 99.99th=[ 5932] 00:20:16.541 bw ( KiB/s): min=91624, max=122088, per=100.00%, avg=104899.89, stdev=11214.92, samples=9 
00:20:16.541 iops : min=22906, max=30522, avg=26224.89, stdev=2803.75, samples=9 00:20:16.541 lat (usec) : 250=0.98%, 500=8.86%, 750=12.86%, 1000=15.46% 00:20:16.541 lat (msec) : 2=40.17%, 4=21.24%, 10=0.43% 00:20:16.541 cpu : usr=24.86%, sys=53.40%, ctx=86, majf=0, minf=764 00:20:16.541 IO depths : 1=0.1%, 2=1.5%, 4=4.6%, 8=10.9%, 16=25.3%, 32=55.9%, >=64=1.8% 00:20:16.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.541 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:16.541 issued rwts: total=130108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:16.541 00:20:16.541 Run status group 0 (all jobs): 00:20:16.541 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=508MiB (533MB), run=5001-5001msec 00:20:17.110 ----------------------------------------------------- 00:20:17.110 Suppressions used: 00:20:17.110 count bytes template 00:20:17.110 1 11 /usr/src/fio/parse.c 00:20:17.110 1 8 libtcmalloc_minimal.so 00:20:17.110 1 904 libcrypto.so 00:20:17.110 ----------------------------------------------------- 00:20:17.110 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:17.110 13:40:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:17.110 13:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:17.110 13:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- 
# [[ -n /usr/lib64/libasan.so.8 ]] 00:20:17.110 13:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:17.110 13:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:17.110 13:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:17.110 { 00:20:17.110 "subsystems": [ 00:20:17.110 { 00:20:17.110 "subsystem": "bdev", 00:20:17.110 "config": [ 00:20:17.110 { 00:20:17.110 "params": { 00:20:17.110 "io_mechanism": "libaio", 00:20:17.110 "conserve_cpu": true, 00:20:17.110 "filename": "/dev/nvme0n1", 00:20:17.110 "name": "xnvme_bdev" 00:20:17.110 }, 00:20:17.110 "method": "bdev_xnvme_create" 00:20:17.110 }, 00:20:17.110 { 00:20:17.110 "method": "bdev_wait_for_examine" 00:20:17.110 } 00:20:17.110 ] 00:20:17.110 } 00:20:17.110 ] 00:20:17.110 } 00:20:17.366 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:17.366 fio-3.35 00:20:17.366 Starting 1 thread 00:20:23.928 00:20:23.928 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71551: Wed Nov 20 13:40:14 2024 00:20:23.928 write: IOPS=25.0k, BW=97.7MiB/s (102MB/s)(489MiB/5001msec); 0 zone resets 00:20:23.928 slat (usec): min=5, max=1096, avg=35.30, stdev=33.97 00:20:23.928 clat (usec): min=122, max=6408, avg=1441.21, stdev=849.67 00:20:23.928 lat (usec): min=194, max=6461, avg=1476.51, stdev=854.76 00:20:23.928 clat percentiles (usec): 00:20:23.928 | 1.00th=[ 258], 5.00th=[ 396], 10.00th=[ 523], 20.00th=[ 734], 00:20:23.928 | 30.00th=[ 898], 40.00th=[ 1057], 50.00th=[ 1221], 60.00th=[ 1434], 00:20:23.928 | 70.00th=[ 1729], 80.00th=[ 2147], 90.00th=[ 2704], 95.00th=[ 3097], 00:20:23.928 | 99.00th=[ 3949], 99.50th=[ 4228], 99.90th=[ 4817], 99.95th=[ 5080], 00:20:23.928 | 99.99th=[ 5669] 00:20:23.928 bw ( KiB/s): min=81512, max=115488, per=99.51%, avg=99593.78, stdev=10058.73, samples=9 00:20:23.928 iops : min=20378, max=28872, avg=24898.44, stdev=2514.68, samples=9 00:20:23.928 lat (usec) : 250=0.84%, 500=8.24%, 750=12.07%, 1000=15.68% 00:20:23.928 lat (msec) : 2=39.98%, 4=22.29%, 10=0.90% 00:20:23.928 cpu : usr=26.88%, sys=51.60%, ctx=67, majf=0, minf=764 00:20:23.928 IO depths : 1=0.1%, 2=1.5%, 4=4.4%, 8=10.8%, 16=25.1%, 32=56.2%, >=64=1.8% 00:20:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.928 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:23.928 issued rwts: total=0,125133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:23.928 00:20:23.928 Run status group 0 (all jobs): 00:20:23.928 WRITE: bw=97.7MiB/s (102MB/s), 97.7MiB/s-97.7MiB/s (102MB/s-102MB/s), io=489MiB (513MB), run=5001-5001msec 00:20:24.187 ----------------------------------------------------- 00:20:24.187 Suppressions used: 00:20:24.187 count bytes template 00:20:24.187 1 11 /usr/src/fio/parse.c 00:20:24.187 1 8 libtcmalloc_minimal.so 00:20:24.187 1 904 libcrypto.so 00:20:24.187 ----------------------------------------------------- 00:20:24.187 00:20:24.187 00:20:24.187 real 0m14.533s 00:20:24.187 user 0m6.185s 00:20:24.187 sys 0m5.837s 00:20:24.187 13:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:20:24.187 13:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:24.187 ************************************ 00:20:24.187 END TEST xnvme_fio_plugin 00:20:24.187 ************************************ 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:24.187 13:40:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:24.187 13:40:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:24.187 13:40:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.187 13:40:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:24.187 ************************************ 00:20:24.187 START TEST xnvme_rpc 00:20:24.187 ************************************ 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71633 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71633 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71633 ']' 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:24.187 13:40:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.446 13:40:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:24.446 [2024-11-20 13:40:16.350180] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
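The xnvme_rpc test starting here exercises the bdev layer purely over JSON-RPC: spdk_tgt comes up, the test creates an xnvme bdev on the io_uring backend, reads the bdev subsystem config back and checks each parameter with jq, then deletes the bdev and kills the target. A minimal sketch of the same session driven by SPDK's scripts/rpc.py (repo path taken from this run; /dev/nvme0n1 must be an otherwise unused NVMe namespace):

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt &   # the test waits for /var/tmp/spdk.sock via waitforlisten

# create the bdev: backing device, bdev name, io_mechanism
$SPDK/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring

# read the config back and verify a parameter, exactly as the jq filters below do
$SPDK/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # io_uring

$SPDK/scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The same framework_get_config | jq pattern recurs in the trace below for the name, filename and conserve_cpu parameters.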
00:20:24.446 [2024-11-20 13:40:16.350378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71633 ] 00:20:24.704 [2024-11-20 13:40:16.533748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.704 [2024-11-20 13:40:16.636407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.640 xnvme_bdev 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:25.640 13:40:17 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.640 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71633 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71633 ']' 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71633 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71633 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.899 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.899 killing process with pid 71633 00:20:25.900 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71633' 00:20:25.900 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71633 00:20:25.900 13:40:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71633 00:20:27.831 00:20:27.831 real 0m3.586s 00:20:27.831 user 0m3.954s 00:20:27.831 sys 0m0.440s 00:20:27.831 13:40:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.831 13:40:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:27.831 ************************************ 00:20:27.831 END TEST xnvme_rpc 00:20:27.831 ************************************ 00:20:27.831 13:40:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:27.831 13:40:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:27.831 13:40:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.831 13:40:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:27.831 ************************************ 00:20:27.831 START TEST xnvme_bdevperf 00:20:27.831 ************************************ 00:20:27.831 13:40:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:27.831 13:40:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:27.831 13:40:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:27.831 13:40:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:27.832 13:40:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:27.832 13:40:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:20:27.832 13:40:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:27.832 13:40:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:28.090 { 00:20:28.090 "subsystems": [ 00:20:28.090 { 00:20:28.090 "subsystem": "bdev", 00:20:28.090 "config": [ 00:20:28.090 { 00:20:28.090 "params": { 00:20:28.090 "io_mechanism": "io_uring", 00:20:28.090 "conserve_cpu": false, 00:20:28.090 "filename": "/dev/nvme0n1", 00:20:28.090 "name": "xnvme_bdev" 00:20:28.090 }, 00:20:28.090 "method": "bdev_xnvme_create" 00:20:28.090 }, 00:20:28.090 { 00:20:28.090 "method": "bdev_wait_for_examine" 00:20:28.090 } 00:20:28.090 ] 00:20:28.090 } 00:20:28.090 ] 00:20:28.090 } 00:20:28.090 [2024-11-20 13:40:19.960691] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:20:28.090 [2024-11-20 13:40:19.960857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71713 ] 00:20:28.348 [2024-11-20 13:40:20.143975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.348 [2024-11-20 13:40:20.293846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.606 Running I/O for 5 seconds... 00:20:30.918 47280.00 IOPS, 184.69 MiB/s [2024-11-20T13:40:23.893Z] 48540.50 IOPS, 189.61 MiB/s [2024-11-20T13:40:24.829Z] 48175.00 IOPS, 188.18 MiB/s [2024-11-20T13:40:25.763Z] 47270.00 IOPS, 184.65 MiB/s [2024-11-20T13:40:25.763Z] 47103.20 IOPS, 184.00 MiB/s 00:20:33.724 Latency(us) 00:20:33.724 [2024-11-20T13:40:25.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.725 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:33.725 xnvme_bdev : 5.01 47068.66 183.86 0.00 0.00 1355.46 480.35 6136.55 00:20:33.725 [2024-11-20T13:40:25.764Z] =================================================================================================================== 00:20:33.725 [2024-11-20T13:40:25.764Z] Total : 47068.66 183.86 0.00 0.00 1355.46 480.35 6136.55 00:20:34.660 13:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:34.660 13:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:34.660 13:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:34.660 13:40:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:34.660 13:40:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:34.919 { 00:20:34.919 "subsystems": [ 00:20:34.919 { 00:20:34.919 "subsystem": "bdev", 00:20:34.919 "config": [ 00:20:34.919 { 00:20:34.919 "params": { 00:20:34.919 "io_mechanism": "io_uring", 00:20:34.919 "conserve_cpu": false, 00:20:34.919 "filename": "/dev/nvme0n1", 00:20:34.919 "name": "xnvme_bdev" 00:20:34.919 }, 00:20:34.919 "method": "bdev_xnvme_create" 00:20:34.919 }, 00:20:34.919 { 00:20:34.919 "method": "bdev_wait_for_examine" 00:20:34.919 } 00:20:34.919 ] 00:20:34.919 } 00:20:34.919 ] 00:20:34.919 } 00:20:34.919 [2024-11-20 13:40:26.786346] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
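Each bdevperf pass here receives the JSON document printed just above it on /dev/fd/62 via process substitution. The equivalent standalone invocation, sketched with the config written to a temporary file instead (device path, bdev name and flags copied from this run; swap -w randread for -w randwrite to reproduce the second pass):

cat > /tmp/xnvme_bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "io_uring", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF

# 4 KiB random I/O, queue depth 64, 5 s, restricted to the xnvme bdev (-T)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096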
00:20:34.919 [2024-11-20 13:40:26.786541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71794 ] 00:20:35.178 [2024-11-20 13:40:26.972142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.178 [2024-11-20 13:40:27.101503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.435 Running I/O for 5 seconds... 00:20:37.444 44608.00 IOPS, 174.25 MiB/s [2024-11-20T13:40:30.860Z] 44002.50 IOPS, 171.88 MiB/s [2024-11-20T13:40:31.427Z] 43713.67 IOPS, 170.76 MiB/s [2024-11-20T13:40:32.803Z] 43249.25 IOPS, 168.94 MiB/s [2024-11-20T13:40:32.803Z] 43623.40 IOPS, 170.40 MiB/s 00:20:40.764 Latency(us) 00:20:40.764 [2024-11-20T13:40:32.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.764 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:40.764 xnvme_bdev : 5.00 43617.69 170.38 0.00 0.00 1462.38 266.24 6017.40 00:20:40.764 [2024-11-20T13:40:32.803Z] =================================================================================================================== 00:20:40.764 [2024-11-20T13:40:32.803Z] Total : 43617.69 170.38 0.00 0.00 1462.38 266.24 6017.40 00:20:41.699 00:20:41.699 real 0m13.551s 00:20:41.699 user 0m7.048s 00:20:41.699 sys 0m6.288s 00:20:41.700 13:40:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.700 13:40:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:41.700 ************************************ 00:20:41.700 END TEST xnvme_bdevperf 00:20:41.700 ************************************ 00:20:41.700 13:40:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:41.700 13:40:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.700 13:40:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.700 13:40:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:41.700 ************************************ 00:20:41.700 START TEST xnvme_fio_plugin 00:20:41.700 ************************************ 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:41.700 
13:40:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.700 13:40:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:41.700 { 00:20:41.700 "subsystems": [ 00:20:41.700 { 00:20:41.700 "subsystem": "bdev", 00:20:41.700 "config": [ 00:20:41.700 { 00:20:41.700 "params": { 00:20:41.700 "io_mechanism": "io_uring", 00:20:41.700 "conserve_cpu": false, 00:20:41.700 "filename": "/dev/nvme0n1", 00:20:41.700 "name": "xnvme_bdev" 00:20:41.700 }, 00:20:41.700 "method": "bdev_xnvme_create" 00:20:41.700 }, 00:20:41.700 { 00:20:41.700 "method": "bdev_wait_for_examine" 00:20:41.700 } 00:20:41.700 ] 00:20:41.700 } 00:20:41.700 ] 00:20:41.700 } 00:20:41.700 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:41.700 fio-3.35 00:20:41.700 Starting 1 thread 00:20:48.260 00:20:48.260 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71913: Wed Nov 20 13:40:39 2024 00:20:48.260 read: IOPS=47.0k, BW=184MiB/s (193MB/s)(918MiB/5001msec) 00:20:48.260 slat (nsec): min=2935, max=79367, avg=4132.88, stdev=1733.69 00:20:48.260 clat (usec): min=205, max=5460, avg=1196.85, stdev=230.14 00:20:48.260 lat (usec): min=209, max=5464, avg=1200.98, stdev=230.49 00:20:48.260 clat percentiles (usec): 00:20:48.260 | 1.00th=[ 906], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1045], 00:20:48.260 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:20:48.260 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1418], 95.00th=[ 1549], 00:20:48.260 | 99.00th=[ 1909], 99.50th=[ 2343], 99.90th=[ 3589], 99.95th=[ 4015], 00:20:48.260 | 99.99th=[ 4686] 00:20:48.260 bw ( KiB/s): 
min=180224, max=196096, per=99.35%, avg=186824.00, stdev=4974.73, samples=9 00:20:48.260 iops : min=45056, max=49024, avg=46706.00, stdev=1243.68, samples=9 00:20:48.260 lat (usec) : 250=0.01%, 500=0.03%, 750=0.10%, 1000=9.70% 00:20:48.260 lat (msec) : 2=89.34%, 4=0.77%, 10=0.05% 00:20:48.260 cpu : usr=38.96%, sys=60.00%, ctx=12, majf=0, minf=762 00:20:48.260 IO depths : 1=1.4%, 2=2.9%, 4=5.9%, 8=12.4%, 16=25.1%, 32=50.7%, >=64=1.6% 00:20:48.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.260 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:48.260 issued rwts: total=235101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:48.260 00:20:48.260 Run status group 0 (all jobs): 00:20:48.260 READ: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=918MiB (963MB), run=5001-5001msec 00:20:48.828 ----------------------------------------------------- 00:20:48.828 Suppressions used: 00:20:48.828 count bytes template 00:20:48.828 1 11 /usr/src/fio/parse.c 00:20:48.828 1 8 libtcmalloc_minimal.so 00:20:48.828 1 904 libcrypto.so 00:20:48.828 ----------------------------------------------------- 00:20:48.828 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:48.828 
13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:48.828 13:40:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:48.828 { 00:20:48.828 "subsystems": [ 00:20:48.828 { 00:20:48.828 "subsystem": "bdev", 00:20:48.828 "config": [ 00:20:48.828 { 00:20:48.828 "params": { 00:20:48.828 "io_mechanism": "io_uring", 00:20:48.828 "conserve_cpu": false, 00:20:48.828 "filename": "/dev/nvme0n1", 00:20:48.828 "name": "xnvme_bdev" 00:20:48.828 }, 00:20:48.828 "method": "bdev_xnvme_create" 00:20:48.828 }, 00:20:48.828 { 00:20:48.828 "method": "bdev_wait_for_examine" 00:20:48.828 } 00:20:48.828 ] 00:20:48.828 } 00:20:48.828 ] 00:20:48.828 } 00:20:49.088 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:49.088 fio-3.35 00:20:49.088 Starting 1 thread 00:20:55.646 00:20:55.646 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72005: Wed Nov 20 13:40:46 2024 00:20:55.646 write: IOPS=47.1k, BW=184MiB/s (193MB/s)(921MiB/5001msec); 0 zone resets 00:20:55.646 slat (usec): min=2, max=121, avg= 4.29, stdev= 1.87 00:20:55.646 clat (usec): min=783, max=3665, avg=1182.72, stdev=183.68 00:20:55.646 lat (usec): min=786, max=3687, avg=1187.01, stdev=184.33 00:20:55.646 clat percentiles (usec): 00:20:55.646 | 1.00th=[ 889], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1037], 00:20:55.646 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:20:55.646 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1418], 95.00th=[ 1549], 00:20:55.646 | 99.00th=[ 1745], 99.50th=[ 1827], 99.90th=[ 2057], 99.95th=[ 2474], 00:20:55.646 | 99.99th=[ 3458] 00:20:55.646 bw ( KiB/s): min=178176, max=203264, per=100.00%, avg=190139.56, stdev=8482.85, samples=9 00:20:55.646 iops : min=44544, max=50816, avg=47534.89, stdev=2120.71, samples=9 00:20:55.646 lat (usec) : 1000=12.23% 00:20:55.646 lat (msec) : 2=87.65%, 4=0.12% 00:20:55.646 cpu : usr=39.68%, sys=59.28%, ctx=13, majf=0, minf=762 00:20:55.646 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:55.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.646 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:55.646 issued rwts: total=0,235776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:55.646 00:20:55.646 Run status group 0 (all jobs): 00:20:55.646 WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=921MiB (966MB), run=5001-5001msec 00:20:56.212 ----------------------------------------------------- 00:20:56.212 Suppressions used: 00:20:56.213 count bytes template 00:20:56.213 1 11 /usr/src/fio/parse.c 00:20:56.213 1 8 libtcmalloc_minimal.so 00:20:56.213 1 904 libcrypto.so 00:20:56.213 ----------------------------------------------------- 00:20:56.213 00:20:56.213 00:20:56.213 real 0m14.624s 00:20:56.213 user 0m7.627s 00:20:56.213 sys 0m6.588s 00:20:56.213 13:40:48 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.213 13:40:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:56.213 ************************************ 00:20:56.213 END TEST xnvme_fio_plugin 00:20:56.213 ************************************ 00:20:56.213 13:40:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:56.213 13:40:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:56.213 13:40:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:56.213 13:40:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:56.213 13:40:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:56.213 13:40:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.213 13:40:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:56.213 ************************************ 00:20:56.213 START TEST xnvme_rpc 00:20:56.213 ************************************ 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72097 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72097 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72097 ']' 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.213 13:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:56.470 [2024-11-20 13:40:48.256906] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
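This second xnvme_rpc pass repeats the create/inspect/delete sequence with cc set to -c, so the bdev comes up with conserve_cpu enabled; the trace differs from the earlier pass only in that flag and in the expected value of the conserve_cpu parameter. Side by side (rpc.py abbreviates the scripts/rpc.py path used in the sketch above):

rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring      # first pass: conserve_cpu=false
rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # this pass:  conserve_cpu=true

rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # true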
00:20:56.470 [2024-11-20 13:40:48.257082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72097 ] 00:20:56.470 [2024-11-20 13:40:48.438156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.729 [2024-11-20 13:40:48.562921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.663 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.664 xnvme_bdev 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72097 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72097 ']' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72097 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72097 00:20:57.664 killing process with pid 72097 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72097' 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72097 00:20:57.664 13:40:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72097 00:21:00.191 00:21:00.191 real 0m3.597s 00:21:00.191 user 0m3.936s 00:21:00.191 sys 0m0.470s 00:21:00.191 13:40:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.191 ************************************ 00:21:00.191 END TEST xnvme_rpc 00:21:00.191 ************************************ 00:21:00.191 13:40:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:00.191 13:40:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:00.191 13:40:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.191 13:40:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.191 13:40:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:00.191 ************************************ 00:21:00.191 START TEST xnvme_bdevperf 00:21:00.191 ************************************ 00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
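The gen_conf call recorded here is how every JSON document in this suite is produced: xnvme.sh fills bash associative arrays named method_<rpc>_<n> (method_bdev_xnvme_create_0 above) and gen_conf from dd/common.sh serializes them into the subsystems layout dumped just below. A reduced, illustrative re-implementation for this single-bdev case (gen_conf_sketch is hypothetical; the real helper handles arbitrary methods and counts):

declare -A method_bdev_xnvme_create_0=(
  [io_mechanism]=io_uring [conserve_cpu]=true
  [filename]=/dev/nvme0n1 [name]=xnvme_bdev
)

gen_conf_sketch() {
  jq -n --arg io "${method_bdev_xnvme_create_0[io_mechanism]}" \
        --argjson cc "${method_bdev_xnvme_create_0[conserve_cpu]}" \
        --arg fn "${method_bdev_xnvme_create_0[filename]}" \
        --arg nm "${method_bdev_xnvme_create_0[name]}" '
    {subsystems: [{subsystem: "bdev", config: [
      {params: {io_mechanism: $io, conserve_cpu: $cc, filename: $fn, name: $nm},
       method: "bdev_xnvme_create"},
      {method: "bdev_wait_for_examine"}]}]}'
}

gen_conf_sketch   # emits the same JSON this run pipes to bdevperf on /dev/fd/62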
00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:00.192 13:40:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:00.192 { 00:21:00.192 "subsystems": [ 00:21:00.192 { 00:21:00.192 "subsystem": "bdev", 00:21:00.192 "config": [ 00:21:00.192 { 00:21:00.192 "params": { 00:21:00.192 "io_mechanism": "io_uring", 00:21:00.192 "conserve_cpu": true, 00:21:00.192 "filename": "/dev/nvme0n1", 00:21:00.192 "name": "xnvme_bdev" 00:21:00.192 }, 00:21:00.192 "method": "bdev_xnvme_create" 00:21:00.192 }, 00:21:00.192 { 00:21:00.192 "method": "bdev_wait_for_examine" 00:21:00.192 } 00:21:00.192 ] 00:21:00.192 } 00:21:00.192 ] 00:21:00.192 } 00:21:00.192 [2024-11-20 13:40:51.881100] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:21:00.192 [2024-11-20 13:40:51.881267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72171 ] 00:21:00.192 [2024-11-20 13:40:52.063286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.192 [2024-11-20 13:40:52.192761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.757 Running I/O for 5 seconds... 00:21:02.629 47040.00 IOPS, 183.75 MiB/s [2024-11-20T13:40:55.602Z] 49440.00 IOPS, 193.12 MiB/s [2024-11-20T13:40:56.976Z] 49706.67 IOPS, 194.17 MiB/s [2024-11-20T13:40:57.542Z] 50160.00 IOPS, 195.94 MiB/s 00:21:05.503 Latency(us) 00:21:05.503 [2024-11-20T13:40:57.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.503 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:05.503 xnvme_bdev : 5.00 49772.37 194.42 0.00 0.00 1281.82 808.03 5570.56 00:21:05.503 [2024-11-20T13:40:57.542Z] =================================================================================================================== 00:21:05.503 [2024-11-20T13:40:57.542Z] Total : 49772.37 194.42 0.00 0.00 1281.82 808.03 5570.56 00:21:06.880 13:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:06.880 13:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:06.880 13:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:06.880 13:40:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:06.880 13:40:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.880 { 00:21:06.880 "subsystems": [ 00:21:06.880 { 00:21:06.880 "subsystem": "bdev", 00:21:06.880 "config": [ 00:21:06.880 { 00:21:06.880 "params": { 00:21:06.880 "io_mechanism": "io_uring", 00:21:06.880 "conserve_cpu": true, 00:21:06.880 "filename": "/dev/nvme0n1", 00:21:06.880 "name": "xnvme_bdev" 00:21:06.880 }, 00:21:06.880 "method": "bdev_xnvme_create" 00:21:06.880 }, 00:21:06.880 { 00:21:06.880 "method": "bdev_wait_for_examine" 00:21:06.880 } 00:21:06.880 ] 00:21:06.880 } 00:21:06.880 ] 00:21:06.880 } 00:21:06.880 [2024-11-20 13:40:58.659805] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
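Once this bdevperf pair finishes, the suite reruns xnvme_fio_plugin against the same conserve_cpu bdev. That test wraps a stock fio binary: fio_bdev locates the ASAN runtime the plugin links against (the ldd | grep libasan | awk '{print $3}' probe in the trace), prepends it and the plugin to LD_PRELOAD, and hands fio the bdev JSON on --spdk_json_conf. Stripped to its essentials (reusing the config file from the bdevperf sketch above with conserve_cpu flipped to true; the sanitizer preload matters only on ASAN builds like this one):

PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
ASAN_LIB=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')   # empty on non-ASAN builds

LD_PRELOAD="$ASAN_LIB $PLUGIN" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev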
00:21:06.880 [2024-11-20 13:40:58.660299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72252 ] 00:21:06.880 [2024-11-20 13:40:58.842805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.139 [2024-11-20 13:40:58.946640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.395 Running I/O for 5 seconds... 00:21:09.265 48640.00 IOPS, 190.00 MiB/s [2024-11-20T13:41:02.680Z] 47648.00 IOPS, 186.12 MiB/s [2024-11-20T13:41:03.615Z] 47893.33 IOPS, 187.08 MiB/s [2024-11-20T13:41:04.551Z] 47680.00 IOPS, 186.25 MiB/s [2024-11-20T13:41:04.551Z] 47782.40 IOPS, 186.65 MiB/s 00:21:12.512 Latency(us) 00:21:12.512 [2024-11-20T13:41:04.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.512 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:12.512 xnvme_bdev : 5.01 47745.59 186.51 0.00 0.00 1335.95 845.27 4051.32 00:21:12.512 [2024-11-20T13:41:04.551Z] =================================================================================================================== 00:21:12.512 [2024-11-20T13:41:04.551Z] Total : 47745.59 186.51 0.00 0.00 1335.95 845.27 4051.32 00:21:13.448 00:21:13.448 real 0m13.512s 00:21:13.448 user 0m9.453s 00:21:13.448 sys 0m3.540s 00:21:13.448 ************************************ 00:21:13.448 END TEST xnvme_bdevperf 00:21:13.448 ************************************ 00:21:13.448 13:41:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.448 13:41:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:13.448 13:41:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:13.448 13:41:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:13.448 13:41:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.448 13:41:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:13.448 ************************************ 00:21:13.448 START TEST xnvme_fio_plugin 00:21:13.448 ************************************ 00:21:13.448 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:13.448 13:41:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:13.448 13:41:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:13.449 13:41:05 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:13.449 13:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:13.449 { 00:21:13.449 "subsystems": [ 00:21:13.449 { 00:21:13.449 "subsystem": "bdev", 00:21:13.449 "config": [ 00:21:13.449 { 00:21:13.449 "params": { 00:21:13.449 "io_mechanism": "io_uring", 00:21:13.449 "conserve_cpu": true, 00:21:13.449 "filename": "/dev/nvme0n1", 00:21:13.449 "name": "xnvme_bdev" 00:21:13.449 }, 00:21:13.449 "method": "bdev_xnvme_create" 00:21:13.449 }, 00:21:13.449 { 00:21:13.449 "method": "bdev_wait_for_examine" 00:21:13.449 } 00:21:13.449 ] 00:21:13.449 } 00:21:13.449 ] 00:21:13.449 } 00:21:13.707 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:13.707 fio-3.35 00:21:13.707 Starting 1 thread 00:21:20.279 00:21:20.279 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72366: Wed Nov 20 13:41:11 2024 00:21:20.279 read: IOPS=46.6k, BW=182MiB/s (191MB/s)(910MiB/5001msec) 00:21:20.279 slat (nsec): min=2792, max=65161, avg=4122.46, stdev=1734.45 00:21:20.279 clat (usec): min=812, max=4767, avg=1209.15, stdev=195.04 00:21:20.279 lat (usec): min=816, max=4776, avg=1213.28, stdev=195.66 00:21:20.279 clat percentiles (usec): 00:21:20.280 | 1.00th=[ 938], 5.00th=[ 988], 10.00th=[ 1020], 20.00th=[ 1057], 00:21:20.280 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:21:20.280 | 70.00th=[ 1254], 80.00th=[ 1319], 90.00th=[ 1450], 95.00th=[ 1565], 00:21:20.280 | 99.00th=[ 1811], 99.50th=[ 1926], 99.90th=[ 2409], 99.95th=[ 3818], 00:21:20.280 | 99.99th=[ 4621] 
00:21:20.280 bw ( KiB/s): min=170496, max=200704, per=100.00%, avg=187448.89, stdev=10832.29, samples=9 00:21:20.280 iops : min=42624, max=50176, avg=46862.22, stdev=2708.07, samples=9 00:21:20.280 lat (usec) : 1000=6.67% 00:21:20.280 lat (msec) : 2=93.01%, 4=0.29%, 10=0.03% 00:21:20.280 cpu : usr=64.56%, sys=31.36%, ctx=9, majf=0, minf=762 00:21:20.280 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:20.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.280 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:20.280 issued rwts: total=232832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.280 00:21:20.280 Run status group 0 (all jobs): 00:21:20.280 READ: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=910MiB (954MB), run=5001-5001msec 00:21:20.538 ----------------------------------------------------- 00:21:20.538 Suppressions used: 00:21:20.538 count bytes template 00:21:20.538 1 11 /usr/src/fio/parse.c 00:21:20.538 1 8 libtcmalloc_minimal.so 00:21:20.538 1 904 libcrypto.so 00:21:20.538 ----------------------------------------------------- 00:21:20.538 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:20.538 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:20.796 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:20.797 
13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:20.797 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:20.797 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:20.797 13:41:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:20.797 { 00:21:20.797 "subsystems": [ 00:21:20.797 { 00:21:20.797 "subsystem": "bdev", 00:21:20.797 "config": [ 00:21:20.797 { 00:21:20.797 "params": { 00:21:20.797 "io_mechanism": "io_uring", 00:21:20.797 "conserve_cpu": true, 00:21:20.797 "filename": "/dev/nvme0n1", 00:21:20.797 "name": "xnvme_bdev" 00:21:20.797 }, 00:21:20.797 "method": "bdev_xnvme_create" 00:21:20.797 }, 00:21:20.797 { 00:21:20.797 "method": "bdev_wait_for_examine" 00:21:20.797 } 00:21:20.797 ] 00:21:20.797 } 00:21:20.797 ] 00:21:20.797 } 00:21:20.797 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:20.797 fio-3.35 00:21:20.797 Starting 1 thread 00:21:27.369 00:21:27.369 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72462: Wed Nov 20 13:41:18 2024 00:21:27.369 write: IOPS=46.7k, BW=182MiB/s (191MB/s)(912MiB/5002msec); 0 zone resets 00:21:27.369 slat (nsec): min=2946, max=70951, avg=4232.14, stdev=1673.88 00:21:27.369 clat (usec): min=738, max=3642, avg=1202.68, stdev=187.33 00:21:27.369 lat (usec): min=742, max=3650, avg=1206.91, stdev=187.80 00:21:27.369 clat percentiles (usec): 00:21:27.369 | 1.00th=[ 914], 5.00th=[ 971], 10.00th=[ 1012], 20.00th=[ 1057], 00:21:27.369 | 30.00th=[ 1090], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:21:27.369 | 70.00th=[ 1254], 80.00th=[ 1319], 90.00th=[ 1450], 95.00th=[ 1565], 00:21:27.369 | 99.00th=[ 1778], 99.50th=[ 1860], 99.90th=[ 2040], 99.95th=[ 2900], 00:21:27.369 | 99.99th=[ 3556] 00:21:27.369 bw ( KiB/s): min=167936, max=203264, per=100.00%, avg=187960.89, stdev=10988.47, samples=9 00:21:27.369 iops : min=41984, max=50816, avg=46990.22, stdev=2747.12, samples=9 00:21:27.369 lat (usec) : 750=0.01%, 1000=8.61% 00:21:27.369 lat (msec) : 2=91.26%, 4=0.14% 00:21:27.369 cpu : usr=67.33%, sys=28.75%, ctx=39, majf=0, minf=762 00:21:27.369 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:27.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.369 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:27.369 issued rwts: total=0,233408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:27.370 00:21:27.370 Run status group 0 (all jobs): 00:21:27.370 WRITE: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=912MiB (956MB), run=5002-5002msec 00:21:28.000 ----------------------------------------------------- 00:21:28.000 Suppressions used: 00:21:28.000 count bytes template 00:21:28.000 1 11 /usr/src/fio/parse.c 00:21:28.000 1 8 libtcmalloc_minimal.so 00:21:28.000 1 904 libcrypto.so 00:21:28.000 ----------------------------------------------------- 00:21:28.000 00:21:28.000 00:21:28.000 real 0m14.551s 00:21:28.000 user 0m10.235s 00:21:28.000 sys 0m3.625s 00:21:28.000 13:41:19 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.000 ************************************ 00:21:28.000 13:41:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:28.000 END TEST xnvme_fio_plugin 00:21:28.000 ************************************ 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:21:28.000 13:41:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:28.000 13:41:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:28.000 13:41:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.000 13:41:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:28.000 ************************************ 00:21:28.000 START TEST xnvme_rpc 00:21:28.000 ************************************ 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72555 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72555 00:21:28.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72555 ']' 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.000 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.279 [2024-11-20 13:41:20.077479] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
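For reference, the RPC exchange the xnvme_rpc test drives once this target is listening can be reproduced by hand. A minimal sketch — assuming scripts/rpc.py from the SPDK tree accepts the same positional arguments the harness's rpc_cmd wrapper forwards here, and that the /dev/ng0n1 char device exists as above:

# create an xnvme bdev over io_uring_cmd (conserve_cpu off, as in this pass)
scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd

# read the saved config back; same jq filter the test applies below
scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'

# tear down
scripts/rpc.py bdev_xnvme_delete xnvme_bdev
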
00:21:28.279 [2024-11-20 13:41:20.077852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72555 ] 00:21:28.279 [2024-11-20 13:41:20.264439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.537 [2024-11-20 13:41:20.392008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 xnvme_bdev 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72555 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72555 ']' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72555 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72555 00:21:29.473 killing process with pid 72555 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72555' 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72555 00:21:29.473 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72555 00:21:32.006 00:21:32.006 real 0m3.622s 00:21:32.006 user 0m3.893s 00:21:32.006 sys 0m0.442s 00:21:32.006 13:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.006 ************************************ 00:21:32.006 END TEST xnvme_rpc 00:21:32.006 ************************************ 00:21:32.006 13:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.006 13:41:23 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:32.006 13:41:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:32.006 13:41:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.006 13:41:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:32.006 ************************************ 00:21:32.006 START TEST xnvme_bdevperf 00:21:32.006 ************************************ 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:32.006 13:41:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:32.006 { 00:21:32.006 "subsystems": [ 00:21:32.006 { 00:21:32.006 "subsystem": "bdev", 00:21:32.006 "config": [ 00:21:32.006 { 00:21:32.006 "params": { 00:21:32.006 "io_mechanism": "io_uring_cmd", 00:21:32.006 "conserve_cpu": false, 00:21:32.006 "filename": "/dev/ng0n1", 00:21:32.006 "name": "xnvme_bdev" 00:21:32.006 }, 00:21:32.006 "method": "bdev_xnvme_create" 00:21:32.006 }, 00:21:32.006 { 00:21:32.006 "method": "bdev_wait_for_examine" 00:21:32.006 } 00:21:32.006 ] 00:21:32.006 } 00:21:32.006 ] 00:21:32.006 } 00:21:32.006 [2024-11-20 13:41:23.723100] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:21:32.006 [2024-11-20 13:41:23.723273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:21:32.006 [2024-11-20 13:41:23.904583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.006 [2024-11-20 13:41:24.009552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.573 Running I/O for 5 seconds... 00:21:34.511 51328.00 IOPS, 200.50 MiB/s [2024-11-20T13:41:27.486Z] 51360.00 IOPS, 200.62 MiB/s [2024-11-20T13:41:28.421Z] 51285.33 IOPS, 200.33 MiB/s [2024-11-20T13:41:29.355Z] 50688.00 IOPS, 198.00 MiB/s [2024-11-20T13:41:29.355Z] 50355.20 IOPS, 196.70 MiB/s 00:21:37.316 Latency(us) 00:21:37.316 [2024-11-20T13:41:29.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.316 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:37.316 xnvme_bdev : 5.01 50308.82 196.52 0.00 0.00 1268.15 808.03 4230.05 00:21:37.316 [2024-11-20T13:41:29.355Z] =================================================================================================================== 00:21:37.316 [2024-11-20T13:41:29.355Z] Total : 50308.82 196.52 0.00 0.00 1268.15 808.03 4230.05 00:21:38.692 13:41:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:38.692 13:41:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:38.692 13:41:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:38.692 13:41:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:38.692 13:41:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:38.692 { 00:21:38.692 "subsystems": [ 00:21:38.692 { 00:21:38.692 "subsystem": "bdev", 00:21:38.692 "config": [ 00:21:38.692 { 00:21:38.692 "params": { 00:21:38.692 "io_mechanism": "io_uring_cmd", 00:21:38.692 "conserve_cpu": false, 00:21:38.692 "filename": "/dev/ng0n1", 00:21:38.692 "name": "xnvme_bdev" 00:21:38.692 }, 00:21:38.692 "method": "bdev_xnvme_create" 00:21:38.692 }, 00:21:38.692 { 00:21:38.692 "method": "bdev_wait_for_examine" 00:21:38.692 } 00:21:38.692 ] 00:21:38.692 } 00:21:38.692 ] 00:21:38.692 } 00:21:38.692 [2024-11-20 13:41:30.501382] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
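Each bdevperf pass here follows the same shape: gen_conf emits the JSON subsystem block printed above onto an anonymous pipe, which bdevperf reads as /dev/fd/62. A hand-run equivalent — a sketch only, assuming the same block is saved to a regular file conf.json instead of being piped:

# conf.json holds the {"subsystems": [{"subsystem": "bdev", ...}]} block printed above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json conf.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
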
00:21:38.692 [2024-11-20 13:41:30.501687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72711 ] 00:21:38.692 [2024-11-20 13:41:30.678676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.950 [2024-11-20 13:41:30.803032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.213 Running I/O for 5 seconds... 00:21:41.527 51840.00 IOPS, 202.50 MiB/s [2024-11-20T13:41:34.132Z] 51008.00 IOPS, 199.25 MiB/s [2024-11-20T13:41:35.507Z] 50944.00 IOPS, 199.00 MiB/s [2024-11-20T13:41:36.443Z] 50529.75 IOPS, 197.38 MiB/s [2024-11-20T13:41:36.443Z] 50279.80 IOPS, 196.41 MiB/s 00:21:44.404 Latency(us) 00:21:44.404 [2024-11-20T13:41:36.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.404 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:44.404 xnvme_bdev : 5.00 50270.22 196.37 0.00 0.00 1268.79 700.04 4081.11 00:21:44.404 [2024-11-20T13:41:36.443Z] =================================================================================================================== 00:21:44.404 [2024-11-20T13:41:36.443Z] Total : 50270.22 196.37 0.00 0.00 1268.79 700.04 4081.11 00:21:45.339 13:41:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:45.339 13:41:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:21:45.339 13:41:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:45.339 13:41:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:45.339 13:41:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 { 00:21:45.339 "subsystems": [ 00:21:45.339 { 00:21:45.339 "subsystem": "bdev", 00:21:45.339 "config": [ 00:21:45.339 { 00:21:45.339 "params": { 00:21:45.339 "io_mechanism": "io_uring_cmd", 00:21:45.339 "conserve_cpu": false, 00:21:45.339 "filename": "/dev/ng0n1", 00:21:45.339 "name": "xnvme_bdev" 00:21:45.339 }, 00:21:45.339 "method": "bdev_xnvme_create" 00:21:45.339 }, 00:21:45.339 { 00:21:45.339 "method": "bdev_wait_for_examine" 00:21:45.339 } 00:21:45.339 ] 00:21:45.339 } 00:21:45.339 ] 00:21:45.339 } 00:21:45.339 [2024-11-20 13:41:37.198180] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:21:45.339 [2024-11-20 13:41:37.198322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72792 ] 00:21:45.339 [2024-11-20 13:41:37.371641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.597 [2024-11-20 13:41:37.473432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.856 Running I/O for 5 seconds... 
00:21:48.164 65280.00 IOPS, 255.00 MiB/s [2024-11-20T13:41:41.138Z] 70112.00 IOPS, 273.88 MiB/s [2024-11-20T13:41:42.073Z] 69909.33 IOPS, 273.08 MiB/s [2024-11-20T13:41:43.005Z] 69072.00 IOPS, 269.81 MiB/s 00:21:50.966 Latency(us) 00:21:50.966 [2024-11-20T13:41:43.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.966 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:21:50.966 xnvme_bdev : 5.00 69086.39 269.87 0.00 0.00 922.23 506.41 3336.38 00:21:50.966 [2024-11-20T13:41:43.005Z] =================================================================================================================== 00:21:50.966 [2024-11-20T13:41:43.005Z] Total : 69086.39 269.87 0.00 0.00 922.23 506.41 3336.38 00:21:51.899 13:41:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:51.899 13:41:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:51.899 13:41:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:51.899 13:41:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:51.899 13:41:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:51.899 { 00:21:51.899 "subsystems": [ 00:21:51.899 { 00:21:51.899 "subsystem": "bdev", 00:21:51.899 "config": [ 00:21:51.899 { 00:21:51.899 "params": { 00:21:51.899 "io_mechanism": "io_uring_cmd", 00:21:51.899 "conserve_cpu": false, 00:21:51.899 "filename": "/dev/ng0n1", 00:21:51.899 "name": "xnvme_bdev" 00:21:51.899 }, 00:21:51.899 "method": "bdev_xnvme_create" 00:21:51.899 }, 00:21:51.899 { 00:21:51.899 "method": "bdev_wait_for_examine" 00:21:51.899 } 00:21:51.899 ] 00:21:51.899 } 00:21:51.899 ] 00:21:51.899 } 00:21:51.899 [2024-11-20 13:41:43.867424] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:21:51.899 [2024-11-20 13:41:43.867570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72862 ] 00:21:52.158 [2024-11-20 13:41:44.047352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.158 [2024-11-20 13:41:44.174638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.724 Running I/O for 5 seconds... 
00:21:54.693 43288.00 IOPS, 169.09 MiB/s [2024-11-20T13:41:47.670Z] 42673.50 IOPS, 166.69 MiB/s [2024-11-20T13:41:48.614Z] 42993.33 IOPS, 167.94 MiB/s [2024-11-20T13:41:49.572Z] 42728.25 IOPS, 166.91 MiB/s 00:21:57.533 Latency(us) 00:21:57.533 [2024-11-20T13:41:49.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.533 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:57.533 xnvme_bdev : 5.00 42920.56 167.66 0.00 0.00 1486.26 322.09 8340.95 00:21:57.533 [2024-11-20T13:41:49.572Z] =================================================================================================================== 00:21:57.533 [2024-11-20T13:41:49.572Z] Total : 42920.56 167.66 0.00 0.00 1486.26 322.09 8340.95 00:21:58.497 00:21:58.497 real 0m26.897s 00:21:58.497 user 0m15.602s 00:21:58.497 sys 0m10.855s 00:21:58.497 13:41:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.497 13:41:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:58.497 ************************************ 00:21:58.497 END TEST xnvme_bdevperf 00:21:58.497 ************************************ 00:21:58.755 13:41:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:58.755 13:41:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:58.755 13:41:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.755 13:41:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:58.755 ************************************ 00:21:58.755 START TEST xnvme_fio_plugin 00:21:58.755 ************************************ 00:21:58.755 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:58.755 13:41:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:58.755 13:41:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:58.755 13:41:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:58.756 13:41:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:58.756 { 00:21:58.756 "subsystems": [ 00:21:58.756 { 00:21:58.756 "subsystem": "bdev", 00:21:58.756 "config": [ 00:21:58.756 { 00:21:58.756 "params": { 00:21:58.756 "io_mechanism": "io_uring_cmd", 00:21:58.756 "conserve_cpu": false, 00:21:58.756 "filename": "/dev/ng0n1", 00:21:58.756 "name": "xnvme_bdev" 00:21:58.756 }, 00:21:58.756 "method": "bdev_xnvme_create" 00:21:58.756 }, 00:21:58.756 { 00:21:58.756 "method": "bdev_wait_for_examine" 00:21:58.756 } 00:21:58.756 ] 00:21:58.756 } 00:21:58.756 ] 00:21:58.756 } 00:21:59.013 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:59.013 fio-3.35 00:21:59.013 Starting 1 thread 00:22:05.613 00:22:05.613 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72986: Wed Nov 20 13:41:56 2024 00:22:05.613 read: IOPS=50.9k, BW=199MiB/s (209MB/s)(995MiB/5001msec) 00:22:05.613 slat (usec): min=3, max=117, avg= 3.92, stdev= 1.60 00:22:05.613 clat (usec): min=743, max=5003, avg=1100.43, stdev=170.49 00:22:05.613 lat (usec): min=746, max=5010, avg=1104.35, stdev=170.96 00:22:05.613 clat percentiles (usec): 00:22:05.613 | 1.00th=[ 840], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 979], 00:22:05.613 | 30.00th=[ 1012], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1106], 00:22:05.613 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1287], 95.00th=[ 1418], 00:22:05.613 | 99.00th=[ 1631], 99.50th=[ 1696], 99.90th=[ 1893], 99.95th=[ 2474], 00:22:05.613 | 99.99th=[ 4883] 00:22:05.613 bw ( KiB/s): min=193536, max=220672, per=100.00%, avg=204686.22, stdev=9612.03, samples=9 00:22:05.613 iops : min=48384, max=55168, avg=51171.56, stdev=2403.01, samples=9 00:22:05.613 lat (usec) : 750=0.01%, 1000=26.86% 00:22:05.613 lat (msec) : 2=73.06%, 4=0.06%, 10=0.03% 00:22:05.613 cpu : usr=42.28%, sys=56.72%, ctx=15, majf=0, minf=762 00:22:05.613 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:05.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.613 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:05.613 issued rwts: 
total=254720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:05.613 00:22:05.613 Run status group 0 (all jobs): 00:22:05.613 READ: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=995MiB (1043MB), run=5001-5001msec 00:22:05.872 ----------------------------------------------------- 00:22:05.872 Suppressions used: 00:22:05.872 count bytes template 00:22:05.872 1 11 /usr/src/fio/parse.c 00:22:05.872 1 8 libtcmalloc_minimal.so 00:22:05.872 1 904 libcrypto.so 00:22:05.872 ----------------------------------------------------- 00:22:05.872 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:05.872 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:22:05.872 { 00:22:05.872 "subsystems": [ 00:22:05.872 { 00:22:05.872 "subsystem": "bdev", 00:22:05.872 "config": [ 00:22:05.872 { 00:22:05.872 "params": { 00:22:05.872 "io_mechanism": "io_uring_cmd", 00:22:05.872 "conserve_cpu": false, 00:22:05.872 "filename": "/dev/ng0n1", 00:22:05.872 "name": "xnvme_bdev" 00:22:05.872 }, 00:22:05.872 "method": "bdev_xnvme_create" 00:22:05.872 }, 00:22:05.872 { 00:22:05.872 "method": "bdev_wait_for_examine" 00:22:05.872 } 00:22:05.872 ] 00:22:05.872 } 00:22:05.872 ] 00:22:05.872 } 00:22:06.130 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:06.130 fio-3.35 00:22:06.130 Starting 1 thread 00:22:12.692 00:22:12.692 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73077: Wed Nov 20 13:42:03 2024 00:22:12.692 write: IOPS=46.3k, BW=181MiB/s (189MB/s)(904MiB/5001msec); 0 zone resets 00:22:12.692 slat (nsec): min=2873, max=84565, avg=4195.97, stdev=2041.30 00:22:12.692 clat (usec): min=328, max=4882, avg=1213.60, stdev=278.95 00:22:12.692 lat (usec): min=332, max=4889, avg=1217.79, stdev=279.50 00:22:12.692 clat percentiles (usec): 00:22:12.692 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[ 979], 20.00th=[ 1029], 00:22:12.692 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1205], 00:22:12.692 | 70.00th=[ 1254], 80.00th=[ 1319], 90.00th=[ 1483], 95.00th=[ 1696], 00:22:12.692 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 2802], 99.95th=[ 4621], 00:22:12.692 | 99.99th=[ 4817] 00:22:12.692 bw ( KiB/s): min=129024, max=207872, per=99.66%, avg=184408.89, stdev=22646.45, samples=9 00:22:12.692 iops : min=32256, max=51968, avg=46102.22, stdev=5661.61, samples=9 00:22:12.692 lat (usec) : 500=0.01%, 1000=13.58% 00:22:12.692 lat (msec) : 2=83.80%, 4=2.56%, 10=0.06% 00:22:12.692 cpu : usr=42.96%, sys=56.04%, ctx=8, majf=0, minf=762 00:22:12.692 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:12.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.692 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:12.693 issued rwts: total=0,231332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:12.693 00:22:12.693 Run status group 0 (all jobs): 00:22:12.693 WRITE: bw=181MiB/s (189MB/s), 181MiB/s-181MiB/s (189MB/s-189MB/s), io=904MiB (948MB), run=5001-5001msec 00:22:12.951 ----------------------------------------------------- 00:22:12.951 Suppressions used: 00:22:12.951 count bytes template 00:22:12.951 1 11 /usr/src/fio/parse.c 00:22:12.951 1 8 libtcmalloc_minimal.so 00:22:12.951 1 904 libcrypto.so 00:22:12.951 ----------------------------------------------------- 00:22:12.951 00:22:13.209 00:22:13.209 real 0m14.434s 00:22:13.209 user 0m7.783s 00:22:13.209 sys 0m6.258s 00:22:13.209 13:42:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.209 13:42:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:13.209 ************************************ 00:22:13.209 END TEST xnvme_fio_plugin 00:22:13.209 ************************************ 00:22:13.209 13:42:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:22:13.209 13:42:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:22:13.209 13:42:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:22:13.209 13:42:05 nvme_xnvme -- 
xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:22:13.209 13:42:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:13.209 13:42:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.209 13:42:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:13.209 ************************************ 00:22:13.209 START TEST xnvme_rpc 00:22:13.209 ************************************ 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:22:13.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73157 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73157 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73157 ']' 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:13.209 13:42:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:13.209 [2024-11-20 13:42:05.151597] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
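This second xnvme_rpc pass differs from the earlier one only in conserve_cpu: cc["true"] expands to the -c flag on bdev_xnvme_create, as the trace below shows. A sketch of the variant and its verification, under the same scripts/rpc.py assumptions as before:

# same bdev, but with conserve_cpu enabled via -c
scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# the flag should round-trip through the saved config as a boolean
scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected: true
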
00:22:13.209 [2024-11-20 13:42:05.151759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73157 ] 00:22:13.467 [2024-11-20 13:42:05.341506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.467 [2024-11-20 13:42:05.466056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.400 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.400 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:14.400 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:14.401 xnvme_bdev 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:14.401 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73157 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73157 ']' 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73157 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73157 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.660 killing process with pid 73157 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73157' 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73157 00:22:14.660 13:42:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73157 00:22:16.563 ************************************ 00:22:16.563 END TEST xnvme_rpc 00:22:16.563 ************************************ 00:22:16.563 00:22:16.563 real 0m3.532s 00:22:16.563 user 0m3.914s 00:22:16.563 sys 0m0.399s 00:22:16.563 13:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.563 13:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:16.822 13:42:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:22:16.822 13:42:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:16.822 13:42:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.822 13:42:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:16.822 ************************************ 00:22:16.822 START TEST xnvme_bdevperf 00:22:16.823 ************************************ 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:16.823 13:42:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:16.823 { 00:22:16.823 "subsystems": [ 00:22:16.823 { 00:22:16.823 "subsystem": "bdev", 00:22:16.823 "config": [ 00:22:16.823 { 00:22:16.823 "params": { 00:22:16.823 "io_mechanism": "io_uring_cmd", 00:22:16.823 "conserve_cpu": true, 00:22:16.823 "filename": "/dev/ng0n1", 00:22:16.823 "name": "xnvme_bdev" 00:22:16.823 }, 00:22:16.823 "method": "bdev_xnvme_create" 00:22:16.823 }, 00:22:16.823 { 00:22:16.823 "method": "bdev_wait_for_examine" 00:22:16.823 } 00:22:16.823 ] 00:22:16.823 } 00:22:16.823 ] 00:22:16.823 } 00:22:16.823 [2024-11-20 13:42:08.719536] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:22:16.823 [2024-11-20 13:42:08.719917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73231 ] 00:22:17.082 [2024-11-20 13:42:08.894347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.082 [2024-11-20 13:42:09.004891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.348 Running I/O for 5 seconds... 00:22:19.684 47360.00 IOPS, 185.00 MiB/s [2024-11-20T13:42:12.656Z] 49888.00 IOPS, 194.88 MiB/s [2024-11-20T13:42:13.589Z] 51712.00 IOPS, 202.00 MiB/s [2024-11-20T13:42:14.523Z] 50400.00 IOPS, 196.88 MiB/s [2024-11-20T13:42:14.523Z] 50636.80 IOPS, 197.80 MiB/s 00:22:22.484 Latency(us) 00:22:22.484 [2024-11-20T13:42:14.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.484 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:22.484 xnvme_bdev : 5.00 50603.71 197.67 0.00 0.00 1260.64 744.73 10545.34 00:22:22.484 [2024-11-20T13:42:14.523Z] =================================================================================================================== 00:22:22.484 [2024-11-20T13:42:14.523Z] Total : 50603.71 197.67 0.00 0.00 1260.64 744.73 10545.34 00:22:23.420 13:42:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:23.420 13:42:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:23.420 13:42:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:23.420 13:42:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:23.420 13:42:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:23.420 { 00:22:23.420 "subsystems": [ 00:22:23.420 { 00:22:23.420 "subsystem": "bdev", 00:22:23.420 "config": [ 00:22:23.420 { 00:22:23.420 "params": { 00:22:23.420 "io_mechanism": "io_uring_cmd", 00:22:23.420 "conserve_cpu": true, 00:22:23.420 "filename": "/dev/ng0n1", 00:22:23.420 "name": "xnvme_bdev" 00:22:23.420 }, 00:22:23.420 "method": "bdev_xnvme_create" 00:22:23.420 }, 00:22:23.420 { 00:22:23.420 "method": "bdev_wait_for_examine" 00:22:23.420 } 00:22:23.420 ] 00:22:23.420 } 00:22:23.420 ] 00:22:23.420 } 00:22:23.679 [2024-11-20 13:42:15.462422] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
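The four bdevperf sections in this test differ only in the -w workload; the harness's "for io_pattern" loop visible in the traces iterates randread, randwrite, unmap and write_zeroes over a single config. Sketched standalone, with conf.json as assumed earlier:

for wl in randread randwrite unmap write_zeroes; do
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json conf.json -q 64 -w "$wl" -t 5 -T xnvme_bdev -o 4096
done
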
00:22:23.679 [2024-11-20 13:42:15.462740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73312 ] 00:22:23.679 [2024-11-20 13:42:15.652994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.937 [2024-11-20 13:42:15.777282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.196 Running I/O for 5 seconds... 00:22:26.064 47360.00 IOPS, 185.00 MiB/s [2024-11-20T13:42:19.478Z] 46816.00 IOPS, 182.88 MiB/s [2024-11-20T13:42:20.414Z] 46060.00 IOPS, 179.92 MiB/s [2024-11-20T13:42:21.351Z] 45230.50 IOPS, 176.68 MiB/s [2024-11-20T13:42:21.351Z] 44831.20 IOPS, 175.12 MiB/s 00:22:29.312 Latency(us) 00:22:29.312 [2024-11-20T13:42:21.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.312 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:29.312 xnvme_bdev : 5.00 44809.78 175.04 0.00 0.00 1423.17 80.06 10426.18 00:22:29.312 [2024-11-20T13:42:21.351Z] =================================================================================================================== 00:22:29.312 [2024-11-20T13:42:21.351Z] Total : 44809.78 175.04 0.00 0.00 1423.17 80.06 10426.18 00:22:30.295 13:42:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:30.295 13:42:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:30.295 13:42:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:22:30.295 13:42:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:30.295 13:42:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:30.295 { 00:22:30.295 "subsystems": [ 00:22:30.295 { 00:22:30.295 "subsystem": "bdev", 00:22:30.295 "config": [ 00:22:30.295 { 00:22:30.295 "params": { 00:22:30.295 "io_mechanism": "io_uring_cmd", 00:22:30.295 "conserve_cpu": true, 00:22:30.295 "filename": "/dev/ng0n1", 00:22:30.295 "name": "xnvme_bdev" 00:22:30.295 }, 00:22:30.295 "method": "bdev_xnvme_create" 00:22:30.295 }, 00:22:30.295 { 00:22:30.295 "method": "bdev_wait_for_examine" 00:22:30.295 } 00:22:30.295 ] 00:22:30.295 } 00:22:30.295 ] 00:22:30.295 } 00:22:30.295 [2024-11-20 13:42:22.284832] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:22:30.295 [2024-11-20 13:42:22.285031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73392 ] 00:22:30.554 [2024-11-20 13:42:22.475312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.554 [2024-11-20 13:42:22.587310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.122 Running I/O for 5 seconds... 
00:22:32.991 66944.00 IOPS, 261.50 MiB/s [2024-11-20T13:42:25.965Z] 69952.00 IOPS, 273.25 MiB/s [2024-11-20T13:42:26.918Z] 67413.33 IOPS, 263.33 MiB/s [2024-11-20T13:42:28.293Z] 66704.00 IOPS, 260.56 MiB/s 00:22:36.254 Latency(us) 00:22:36.254 [2024-11-20T13:42:28.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.254 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:22:36.254 xnvme_bdev : 5.00 66538.70 259.92 0.00 0.00 957.43 491.52 3589.59 00:22:36.254 [2024-11-20T13:42:28.293Z] =================================================================================================================== 00:22:36.254 [2024-11-20T13:42:28.293Z] Total : 66538.70 259.92 0.00 0.00 957.43 491.52 3589.59 00:22:37.190 13:42:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:37.190 13:42:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:22:37.190 13:42:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:37.190 13:42:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:37.190 13:42:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:37.190 { 00:22:37.190 "subsystems": [ 00:22:37.190 { 00:22:37.190 "subsystem": "bdev", 00:22:37.190 "config": [ 00:22:37.190 { 00:22:37.190 "params": { 00:22:37.190 "io_mechanism": "io_uring_cmd", 00:22:37.190 "conserve_cpu": true, 00:22:37.190 "filename": "/dev/ng0n1", 00:22:37.190 "name": "xnvme_bdev" 00:22:37.190 }, 00:22:37.190 "method": "bdev_xnvme_create" 00:22:37.190 }, 00:22:37.190 { 00:22:37.190 "method": "bdev_wait_for_examine" 00:22:37.190 } 00:22:37.190 ] 00:22:37.190 } 00:22:37.190 ] 00:22:37.190 } 00:22:37.190 [2024-11-20 13:42:28.992313] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:22:37.190 [2024-11-20 13:42:28.992488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73461 ] 00:22:37.190 [2024-11-20 13:42:29.183113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.447 [2024-11-20 13:42:29.350383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.705 Running I/O for 5 seconds... 
00:22:39.644 46562.00 IOPS, 181.88 MiB/s [2024-11-20T13:42:33.057Z] 45783.50 IOPS, 178.84 MiB/s [2024-11-20T13:42:33.993Z] 45454.67 IOPS, 177.56 MiB/s [2024-11-20T13:42:34.928Z] 44412.75 IOPS, 173.49 MiB/s [2024-11-20T13:42:34.928Z] 43920.60 IOPS, 171.56 MiB/s 00:22:42.889 Latency(us) 00:22:42.889 [2024-11-20T13:42:34.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.889 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:22:42.889 xnvme_bdev : 5.01 43885.82 171.43 0.00 0.00 1453.81 184.32 12153.95 00:22:42.889 [2024-11-20T13:42:34.928Z] =================================================================================================================== 00:22:42.889 [2024-11-20T13:42:34.928Z] Total : 43885.82 171.43 0.00 0.00 1453.81 184.32 12153.95 00:22:43.825 00:22:43.825 real 0m27.133s 00:22:43.825 user 0m20.868s 00:22:43.825 sys 0m5.108s 00:22:43.825 13:42:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.825 13:42:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 ************************************ 00:22:43.825 END TEST xnvme_bdevperf 00:22:43.825 ************************************ 00:22:43.825 13:42:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:22:43.825 13:42:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:43.825 13:42:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.825 13:42:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 ************************************ 00:22:43.825 START TEST xnvme_fio_plugin 00:22:43.825 ************************************ 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
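The setup being traced here resolves which ASan runtime the fio plugin links against so that it can be preloaded ahead of the plugin itself; it builds up to the LD_PRELOAD invocation a few lines below, which reduces to this sketch (flags verbatim from the traced command line; conf.json standing in for the /dev/fd/62 pipe):

# find the ASan runtime the plugin is linked against, as the trace does
asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')

# preload ASan first, then the spdk_bdev external ioengine, and run fio
LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=conf.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
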
00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:43.825 13:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:43.825 { 00:22:43.825 "subsystems": [ 00:22:43.825 { 00:22:43.825 "subsystem": "bdev", 00:22:43.825 "config": [ 00:22:43.825 { 00:22:43.825 "params": { 00:22:43.825 "io_mechanism": "io_uring_cmd", 00:22:43.825 "conserve_cpu": true, 00:22:43.825 "filename": "/dev/ng0n1", 00:22:43.825 "name": "xnvme_bdev" 00:22:43.825 }, 00:22:43.825 "method": "bdev_xnvme_create" 00:22:43.825 }, 00:22:43.825 { 00:22:43.825 "method": "bdev_wait_for_examine" 00:22:43.825 } 00:22:43.825 ] 00:22:43.825 } 00:22:43.825 ] 00:22:43.825 } 00:22:44.083 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:44.083 fio-3.35 00:22:44.083 Starting 1 thread 00:22:50.641 00:22:50.641 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73585: Wed Nov 20 13:42:41 2024 00:22:50.641 read: IOPS=48.6k, BW=190MiB/s (199MB/s)(950MiB/5001msec) 00:22:50.641 slat (nsec): min=3031, max=60794, avg=4487.07, stdev=2827.63 00:22:50.641 clat (usec): min=698, max=3857, avg=1134.93, stdev=284.54 00:22:50.641 lat (usec): min=701, max=3864, avg=1139.41, stdev=286.37 00:22:50.641 clat percentiles (usec): 00:22:50.641 | 1.00th=[ 783], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 922], 00:22:50.641 | 30.00th=[ 971], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1123], 00:22:50.641 | 70.00th=[ 1188], 80.00th=[ 1287], 90.00th=[ 1516], 95.00th=[ 1745], 00:22:50.641 | 99.00th=[ 2114], 99.50th=[ 2212], 99.90th=[ 2638], 99.95th=[ 3523], 00:22:50.641 | 99.99th=[ 3752] 00:22:50.641 bw ( KiB/s): min=133632, max=232960, per=100.00%, avg=194901.33, stdev=28846.33, samples=9 00:22:50.641 iops : min=33408, max=58240, avg=48725.33, stdev=7211.58, samples=9 00:22:50.641 lat (usec) : 750=0.16%, 1000=36.12% 00:22:50.641 lat (msec) : 2=61.92%, 4=1.81% 00:22:50.641 cpu : usr=67.38%, sys=29.66%, ctx=13, majf=0, minf=762 00:22:50.641 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:50.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.641 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:22:50.641 issued rwts: total=243264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.641 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:50.641 00:22:50.641 Run status group 0 (all jobs): 00:22:50.641 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=950MiB (996MB), run=5001-5001msec 00:22:51.208 ----------------------------------------------------- 00:22:51.208 Suppressions used: 00:22:51.208 count bytes template 00:22:51.208 1 11 /usr/src/fio/parse.c 00:22:51.208 1 8 libtcmalloc_minimal.so 00:22:51.208 1 904 libcrypto.so 00:22:51.208 ----------------------------------------------------- 00:22:51.208 00:22:51.208 13:42:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:51.208 13:42:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:51.208 13:42:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:51.208 13:42:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:51.209 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:51.466 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:51.466 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:51.466 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:51.466 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:51.466 13:42:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:51.466 { 00:22:51.466 "subsystems": [ 00:22:51.467 { 00:22:51.467 "subsystem": "bdev", 00:22:51.467 "config": [ 00:22:51.467 { 00:22:51.467 "params": { 00:22:51.467 "io_mechanism": "io_uring_cmd", 00:22:51.467 "conserve_cpu": true, 00:22:51.467 "filename": "/dev/ng0n1", 00:22:51.467 "name": "xnvme_bdev" 00:22:51.467 }, 00:22:51.467 "method": "bdev_xnvme_create" 00:22:51.467 }, 00:22:51.467 { 00:22:51.467 "method": "bdev_wait_for_examine" 00:22:51.467 } 00:22:51.467 ] 00:22:51.467 } 00:22:51.467 ] 00:22:51.467 } 00:22:51.467 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:51.467 fio-3.35 00:22:51.467 Starting 1 thread 00:22:58.025 00:22:58.025 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73677: Wed Nov 20 13:42:49 2024 00:22:58.025 write: IOPS=48.6k, BW=190MiB/s (199MB/s)(949MiB/5001msec); 0 zone resets 00:22:58.025 slat (usec): min=3, max=119, avg= 4.24, stdev= 1.82 00:22:58.025 clat (usec): min=758, max=5371, avg=1147.41, stdev=199.74 00:22:58.025 lat (usec): min=761, max=5377, avg=1151.65, stdev=200.24 00:22:58.025 clat percentiles (usec): 00:22:58.025 | 1.00th=[ 857], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:22:58.025 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1123], 60.00th=[ 1156], 00:22:58.025 | 70.00th=[ 1205], 80.00th=[ 1270], 90.00th=[ 1385], 95.00th=[ 1483], 00:22:58.025 | 99.00th=[ 1696], 99.50th=[ 1778], 99.90th=[ 3032], 99.95th=[ 3818], 00:22:58.025 | 99.99th=[ 5276] 00:22:58.025 bw ( KiB/s): min=175776, max=205824, per=100.00%, avg=195203.56, stdev=10718.05, samples=9 00:22:58.025 iops : min=43944, max=51456, avg=48800.89, stdev=2679.51, samples=9 00:22:58.025 lat (usec) : 1000=19.70% 00:22:58.025 lat (msec) : 2=80.08%, 4=0.19%, 10=0.03% 00:22:58.025 cpu : usr=72.84%, sys=24.02%, ctx=14, majf=0, minf=762 00:22:58.025 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:58.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:58.025 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:58.025 issued rwts: total=0,243008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:58.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:58.025 00:22:58.025 Run status group 0 (all jobs): 00:22:58.025 WRITE: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=949MiB (995MB), run=5001-5001msec 00:22:58.592 ----------------------------------------------------- 00:22:58.592 Suppressions used: 00:22:58.592 count bytes template 00:22:58.592 1 11 /usr/src/fio/parse.c 00:22:58.592 1 8 libtcmalloc_minimal.so 00:22:58.592 1 904 libcrypto.so 00:22:58.592 ----------------------------------------------------- 00:22:58.592 00:22:58.592 00:22:58.592 real 0m14.819s 00:22:58.592 user 0m10.911s 00:22:58.592 sys 0m3.311s 00:22:58.592 13:42:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.592 13:42:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:58.592 ************************************ 00:22:58.592 END TEST xnvme_fio_plugin 00:22:58.592 ************************************ 00:22:58.850 13:42:50 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73157 00:22:58.850 13:42:50 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73157 ']' 00:22:58.850 13:42:50 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73157 00:22:58.851 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73157) - No such process 00:22:58.851 Process with pid 73157 is not found 00:22:58.851 13:42:50 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73157 is not found' 00:22:58.851 00:22:58.851 real 3m45.764s 00:22:58.851 user 2m17.721s 00:22:58.851 sys 1m13.147s 00:22:58.851 13:42:50 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.851 13:42:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:58.851 ************************************ 00:22:58.851 END TEST nvme_xnvme 00:22:58.851 ************************************ 00:22:58.851 13:42:50 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:58.851 13:42:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:58.851 13:42:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.851 13:42:50 -- common/autotest_common.sh@10 -- # set +x 00:22:58.851 ************************************ 00:22:58.851 START TEST blockdev_xnvme 00:22:58.851 ************************************ 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:58.851 * Looking for test storage... 00:22:58.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.851 13:42:50 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.851 --rc genhtml_branch_coverage=1 00:22:58.851 --rc genhtml_function_coverage=1 00:22:58.851 --rc genhtml_legend=1 00:22:58.851 --rc geninfo_all_blocks=1 00:22:58.851 --rc geninfo_unexecuted_blocks=1 00:22:58.851 00:22:58.851 ' 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.851 --rc genhtml_branch_coverage=1 00:22:58.851 --rc genhtml_function_coverage=1 00:22:58.851 --rc genhtml_legend=1 00:22:58.851 --rc geninfo_all_blocks=1 00:22:58.851 --rc geninfo_unexecuted_blocks=1 00:22:58.851 00:22:58.851 ' 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.851 --rc genhtml_branch_coverage=1 00:22:58.851 --rc genhtml_function_coverage=1 00:22:58.851 --rc genhtml_legend=1 00:22:58.851 --rc geninfo_all_blocks=1 00:22:58.851 --rc geninfo_unexecuted_blocks=1 00:22:58.851 00:22:58.851 ' 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.851 --rc genhtml_branch_coverage=1 00:22:58.851 --rc genhtml_function_coverage=1 00:22:58.851 --rc genhtml_legend=1 00:22:58.851 --rc geninfo_all_blocks=1 00:22:58.851 --rc geninfo_unexecuted_blocks=1 00:22:58.851 00:22:58.851 ' 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73817 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73817 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73817 ']' 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.851 13:42:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:58.851 13:42:50 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:59.110 [2024-11-20 13:42:50.985853] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
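Note: once this spdk_tgt instance (pid 73817) is listening on /var/tmp/spdk.sock, setup_xnvme_conf below walks /dev/nvme*n*, skips zoned namespaces, and queues one bdev_xnvme_create per device through rpc_cmd. Issued by hand, the same RPCs would be (commands copied from the printf later in this trace; -c enables conserve_cpu):

scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
scripts/rpc.py bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c
scripts/rpc.py bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c
scripts/rpc.py bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c
scripts/rpc.py bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c
scripts/rpc.py bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c
scripts/rpc.py bdev_wait_for_examine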
00:22:59.110 [2024-11-20 13:42:50.986022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73817 ] 00:22:59.369 [2024-11-20 13:42:51.158591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.369 [2024-11-20 13:42:51.262018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.304 13:42:52 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.304 13:42:52 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:23:00.305 13:42:52 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:23:00.305 13:42:52 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:23:00.305 13:42:52 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:23:00.305 13:42:52 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:23:00.305 13:42:52 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:00.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:01.197 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:01.197 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:01.197 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:23:01.197 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:23:01.197 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:23:01.197 13:42:53 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:01.198 nvme0n1 00:23:01.198 nvme0n2 00:23:01.198 nvme0n3 00:23:01.198 nvme1n1 00:23:01.198 nvme2n1 00:23:01.198 nvme3n1 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:23:01.198 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.198 13:42:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.457 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.457 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.457 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:23:01.457 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:01.457 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq 
-r '.[] | select(.claimed == false)' 00:23:01.457 13:42:53 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.457 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:23:01.457 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:23:01.458 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7d1bcc4f-cb39-43d1-9c88-588289e30d1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d1bcc4f-cb39-43d1-9c88-588289e30d1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "282661a5-d87f-4ea8-b803-b673c84d8324"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "282661a5-d87f-4ea8-b803-b673c84d8324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "97297a9a-b7a8-4a8a-9633-26a7bca25b5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "97297a9a-b7a8-4a8a-9633-26a7bca25b5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e9b8b3e9-cdff-46f5-b291-4a90ec426a73"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e9b8b3e9-cdff-46f5-b291-4a90ec426a73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "86a859bb-349f-4140-8cc2-74f18690560f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "86a859bb-349f-4140-8cc2-74f18690560f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "be5f5041-89cf-4a11-af33-21bf4da36606"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "be5f5041-89cf-4a11-af33-21bf4da36606",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:23:01.458 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:23:01.458 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:23:01.458 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:23:01.458 13:42:53 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73817 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73817 ']' 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73817 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73817 00:23:01.458 killing process with pid 73817 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73817' 00:23:01.458 13:42:53 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73817 00:23:01.458 13:42:53 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73817 00:23:03.990 13:42:55 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:03.990 13:42:55 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:23:03.990 13:42:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:03.991 13:42:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.991 13:42:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:03.991 ************************************ 00:23:03.991 START TEST bdev_hello_world 00:23:03.991 ************************************ 00:23:03.991 13:42:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:23:03.991 [2024-11-20 13:42:55.658008] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:23:03.991 [2024-11-20 13:42:55.658168] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74106 ] 00:23:03.991 [2024-11-20 13:42:55.835268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.991 [2024-11-20 13:42:55.944704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.558 [2024-11-20 13:42:56.357646] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:04.558 [2024-11-20 13:42:56.357741] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:23:04.558 [2024-11-20 13:42:56.357782] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:04.558 [2024-11-20 13:42:56.361312] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:04.558 [2024-11-20 13:42:56.361654] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:04.558 [2024-11-20 13:42:56.361716] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:04.558 [2024-11-20 13:42:56.361954] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
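Note: the hello_bdev NOTICE sequence above is the bdev I/O path in miniature: open the bdev, get an I/O channel, write the greeting, read it back. Reproduced from the repo root with the same binary, config, and target bdev as logged (the trailing '' in the logged invocation is an empty extra-args placeholder and can be dropped):

# Expects the "Read string from bdev : Hello World!" notice on success.
./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1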
00:23:04.558 00:23:04.558 [2024-11-20 13:42:56.362032] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:05.494 ************************************ 00:23:05.494 END TEST bdev_hello_world 00:23:05.494 00:23:05.494 real 0m1.837s 00:23:05.494 user 0m1.499s 00:23:05.494 sys 0m0.220s 00:23:05.494 13:42:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.494 13:42:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:05.494 ************************************ 00:23:05.494 13:42:57 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:23:05.494 13:42:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.494 13:42:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.494 13:42:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:05.494 ************************************ 00:23:05.494 START TEST bdev_bounds 00:23:05.494 ************************************ 00:23:05.494 13:42:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:05.494 13:42:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74138 00:23:05.494 13:42:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:05.494 Process bdevio pid: 74138 00:23:05.494 13:42:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74138' 00:23:05.494 13:42:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74138 00:23:05.495 13:42:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:05.495 13:42:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74138 ']' 00:23:05.495 13:42:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.495 13:42:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.495 13:42:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.495 13:42:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.495 13:42:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 [2024-11-20 13:42:57.551663] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:23:05.753 [2024-11-20 13:42:57.551856] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74138 ] 00:23:05.753 [2024-11-20 13:42:57.743966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:06.012 [2024-11-20 13:42:57.864488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.012 [2024-11-20 13:42:57.865935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.012 [2024-11-20 13:42:57.865968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.963 13:42:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.963 13:42:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:06.963 13:42:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:06.963 I/O targets: 00:23:06.963 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:06.963 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:06.963 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:06.963 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:23:06.963 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:23:06.963 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:23:06.963 00:23:06.963 00:23:06.963 CUnit - A unit testing framework for C - Version 2.1-3 00:23:06.963 http://cunit.sourceforge.net/ 00:23:06.963 00:23:06.963 00:23:06.963 Suite: bdevio tests on: nvme3n1 00:23:06.963 Test: blockdev write read block ...passed 00:23:06.963 Test: blockdev write zeroes read block ...passed 00:23:06.963 Test: blockdev write zeroes read no split ...passed 00:23:06.963 Test: blockdev write zeroes read split ...passed 00:23:06.963 Test: blockdev write zeroes read split partial ...passed 00:23:06.963 Test: blockdev reset ...passed 00:23:06.963 Test: blockdev write read 8 blocks ...passed 00:23:06.963 Test: blockdev write read size > 128k ...passed 00:23:06.964 Test: blockdev write read invalid size ...passed 00:23:06.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:06.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:06.964 Test: blockdev write read max offset ...passed 00:23:06.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:06.964 Test: blockdev writev readv 8 blocks ...passed 00:23:06.964 Test: blockdev writev readv 30 x 1block ...passed 00:23:06.964 Test: blockdev writev readv block ...passed 00:23:06.964 Test: blockdev writev readv size > 128k ...passed 00:23:06.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:06.964 Test: blockdev comparev and writev ...passed 00:23:06.964 Test: blockdev nvme passthru rw ...passed 00:23:06.964 Test: blockdev nvme passthru vendor specific ...passed 00:23:06.964 Test: blockdev nvme admin passthru ...passed 00:23:06.964 Test: blockdev copy ...passed 00:23:06.964 Suite: bdevio tests on: nvme2n1 00:23:06.964 Test: blockdev write read block ...passed 00:23:06.964 Test: blockdev write zeroes read block ...passed 00:23:06.964 Test: blockdev write zeroes read no split ...passed 00:23:06.964 Test: blockdev write zeroes read split ...passed 00:23:06.964 Test: blockdev write zeroes read split partial ...passed 00:23:06.964 Test: blockdev reset ...passed 
00:23:06.964 Test: blockdev write read 8 blocks ...passed 00:23:06.964 Test: blockdev write read size > 128k ...passed 00:23:06.964 Test: blockdev write read invalid size ...passed 00:23:06.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:06.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:06.964 Test: blockdev write read max offset ...passed 00:23:06.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:06.964 Test: blockdev writev readv 8 blocks ...passed 00:23:06.964 Test: blockdev writev readv 30 x 1block ...passed 00:23:06.964 Test: blockdev writev readv block ...passed 00:23:06.964 Test: blockdev writev readv size > 128k ...passed 00:23:06.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:06.964 Test: blockdev comparev and writev ...passed 00:23:06.964 Test: blockdev nvme passthru rw ...passed 00:23:06.964 Test: blockdev nvme passthru vendor specific ...passed 00:23:06.964 Test: blockdev nvme admin passthru ...passed 00:23:06.964 Test: blockdev copy ...passed 00:23:06.964 Suite: bdevio tests on: nvme1n1 00:23:06.964 Test: blockdev write read block ...passed 00:23:06.964 Test: blockdev write zeroes read block ...passed 00:23:06.964 Test: blockdev write zeroes read no split ...passed 00:23:06.964 Test: blockdev write zeroes read split ...passed 00:23:07.222 Test: blockdev write zeroes read split partial ...passed 00:23:07.222 Test: blockdev reset ...passed 00:23:07.222 Test: blockdev write read 8 blocks ...passed 00:23:07.222 Test: blockdev write read size > 128k ...passed 00:23:07.222 Test: blockdev write read invalid size ...passed 00:23:07.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:07.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:07.222 Test: blockdev write read max offset ...passed 00:23:07.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:07.222 Test: blockdev writev readv 8 blocks ...passed 00:23:07.222 Test: blockdev writev readv 30 x 1block ...passed 00:23:07.222 Test: blockdev writev readv block ...passed 00:23:07.222 Test: blockdev writev readv size > 128k ...passed 00:23:07.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:07.222 Test: blockdev comparev and writev ...passed 00:23:07.222 Test: blockdev nvme passthru rw ...passed 00:23:07.222 Test: blockdev nvme passthru vendor specific ...passed 00:23:07.222 Test: blockdev nvme admin passthru ...passed 00:23:07.222 Test: blockdev copy ...passed 00:23:07.222 Suite: bdevio tests on: nvme0n3 00:23:07.222 Test: blockdev write read block ...passed 00:23:07.222 Test: blockdev write zeroes read block ...passed 00:23:07.222 Test: blockdev write zeroes read no split ...passed 00:23:07.222 Test: blockdev write zeroes read split ...passed 00:23:07.222 Test: blockdev write zeroes read split partial ...passed 00:23:07.222 Test: blockdev reset ...passed 00:23:07.222 Test: blockdev write read 8 blocks ...passed 00:23:07.222 Test: blockdev write read size > 128k ...passed 00:23:07.222 Test: blockdev write read invalid size ...passed 00:23:07.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:07.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:07.222 Test: blockdev write read max offset ...passed 00:23:07.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:07.222 Test: blockdev writev readv 8 blocks 
...passed 00:23:07.222 Test: blockdev writev readv 30 x 1block ...passed 00:23:07.222 Test: blockdev writev readv block ...passed 00:23:07.222 Test: blockdev writev readv size > 128k ...passed 00:23:07.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:07.222 Test: blockdev comparev and writev ...passed 00:23:07.222 Test: blockdev nvme passthru rw ...passed 00:23:07.222 Test: blockdev nvme passthru vendor specific ...passed 00:23:07.222 Test: blockdev nvme admin passthru ...passed 00:23:07.222 Test: blockdev copy ...passed 00:23:07.222 Suite: bdevio tests on: nvme0n2 00:23:07.222 Test: blockdev write read block ...passed 00:23:07.222 Test: blockdev write zeroes read block ...passed 00:23:07.222 Test: blockdev write zeroes read no split ...passed 00:23:07.222 Test: blockdev write zeroes read split ...passed 00:23:07.222 Test: blockdev write zeroes read split partial ...passed 00:23:07.222 Test: blockdev reset ...passed 00:23:07.222 Test: blockdev write read 8 blocks ...passed 00:23:07.222 Test: blockdev write read size > 128k ...passed 00:23:07.222 Test: blockdev write read invalid size ...passed 00:23:07.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:07.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:07.222 Test: blockdev write read max offset ...passed 00:23:07.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:07.222 Test: blockdev writev readv 8 blocks ...passed 00:23:07.222 Test: blockdev writev readv 30 x 1block ...passed 00:23:07.222 Test: blockdev writev readv block ...passed 00:23:07.222 Test: blockdev writev readv size > 128k ...passed 00:23:07.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:07.222 Test: blockdev comparev and writev ...passed 00:23:07.222 Test: blockdev nvme passthru rw ...passed 00:23:07.222 Test: blockdev nvme passthru vendor specific ...passed 00:23:07.223 Test: blockdev nvme admin passthru ...passed 00:23:07.223 Test: blockdev copy ...passed 00:23:07.223 Suite: bdevio tests on: nvme0n1 00:23:07.223 Test: blockdev write read block ...passed 00:23:07.223 Test: blockdev write zeroes read block ...passed 00:23:07.223 Test: blockdev write zeroes read no split ...passed 00:23:07.223 Test: blockdev write zeroes read split ...passed 00:23:07.223 Test: blockdev write zeroes read split partial ...passed 00:23:07.223 Test: blockdev reset ...passed 00:23:07.223 Test: blockdev write read 8 blocks ...passed 00:23:07.223 Test: blockdev write read size > 128k ...passed 00:23:07.223 Test: blockdev write read invalid size ...passed 00:23:07.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:07.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:07.223 Test: blockdev write read max offset ...passed 00:23:07.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:07.223 Test: blockdev writev readv 8 blocks ...passed 00:23:07.223 Test: blockdev writev readv 30 x 1block ...passed 00:23:07.223 Test: blockdev writev readv block ...passed 00:23:07.223 Test: blockdev writev readv size > 128k ...passed 00:23:07.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:07.223 Test: blockdev comparev and writev ...passed 00:23:07.223 Test: blockdev nvme passthru rw ...passed 00:23:07.223 Test: blockdev nvme passthru vendor specific ...passed 00:23:07.223 Test: blockdev nvme admin passthru ...passed 00:23:07.223 Test: blockdev copy ...passed 
00:23:07.223 00:23:07.223 Run Summary: Type Total Ran Passed Failed Inactive 00:23:07.223 suites 6 6 n/a 0 0 00:23:07.223 tests 138 138 138 0 0 00:23:07.223 asserts 780 780 780 0 n/a 00:23:07.223 00:23:07.223 Elapsed time = 1.246 seconds 00:23:07.223 0 00:23:07.223 13:42:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74138 00:23:07.223 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74138 ']' 00:23:07.223 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74138 00:23:07.223 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:07.223 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.223 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74138 00:23:07.480 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.480 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.480 killing process with pid 74138 00:23:07.480 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74138' 00:23:07.480 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74138 00:23:07.480 13:42:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74138 00:23:08.414 13:43:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:08.414 00:23:08.414 real 0m2.839s 00:23:08.414 user 0m7.346s 00:23:08.414 sys 0m0.372s 00:23:08.414 13:43:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.414 13:43:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:08.414 ************************************ 00:23:08.414 END TEST bdev_bounds 00:23:08.414 ************************************ 00:23:08.414 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:08.414 13:43:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:08.414 13:43:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.414 13:43:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:08.414 ************************************ 00:23:08.414 START TEST bdev_nbd 00:23:08.414 ************************************ 00:23:08.414 13:43:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:08.414 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:08.414 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:08.414 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
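Note: the bdev_nbd phase that begins here re-exports each of the six xnvme bdevs as a kernel block device, which is why it first checks for /sys/module/nbd. Against the bdev_svc instance started below, a rough manual equivalent is (socket path and dd parameters as logged; when no /dev/nbdX argument is given, SPDK picks the first free one; nbd_stop_disk is the teardown counterpart, not shown in this excerpt):

scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
# Direct 4 KiB read-back through the kernel node, as the test does.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0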
00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74203 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74203 /var/tmp/spdk-nbd.sock 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74203 ']' 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:08.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.415 13:43:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:08.415 [2024-11-20 13:43:00.452273] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
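Before any nbd_start_disk call can succeed, the test stands up a bdev_svc target on a private RPC socket and blocks until it listens; the "Starting SPDK" notice above (and the DPDK EAL parameter line that follows) is that target coming up. A sketch of the start-and-wait pattern, with the polling mechanism assumed rather than taken from the trace:

# Launch the bdev service on its own RPC socket, then poll until the socket
# answers; an illustrative stand-in for the waitforlisten step in the trace.
rpc_sock=/var/tmp/spdk-nbd.sock
./test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 --json test/bdev/bdev.json &
nbd_pid=$!

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods fails until the app has initialized and is listening
    if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done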
00:23:08.415 [2024-11-20 13:43:00.452499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.673 [2024-11-20 13:43:00.655425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.931 [2024-11-20 13:43:00.789602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:09.498 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.065 
1+0 records in 00:23:10.065 1+0 records out 00:23:10.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481153 s, 8.5 MB/s 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:10.065 13:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.324 1+0 records in 00:23:10.324 1+0 records out 00:23:10.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543003 s, 7.5 MB/s 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:10.324 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:23:10.582 13:43:02 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:10.582 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.582 1+0 records in 00:23:10.582 1+0 records out 00:23:10.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630003 s, 6.5 MB/s 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:10.583 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.841 1+0 records in 00:23:10.841 1+0 records out 00:23:10.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683292 s, 6.0 MB/s 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:10.841 13:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.408 1+0 records in 00:23:11.408 1+0 records out 00:23:11.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768734 s, 5.3 MB/s 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:11.408 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:23:11.666 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:11.666 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:11.666 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:11.666 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:23:11.666 13:43:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:11.666 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:11.666 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.667 1+0 records in 00:23:11.667 1+0 records out 00:23:11.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626306 s, 6.5 MB/s 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:11.667 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd0", 00:23:11.924 "bdev_name": "nvme0n1" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd1", 00:23:11.924 "bdev_name": "nvme0n2" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd2", 00:23:11.924 "bdev_name": "nvme0n3" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd3", 00:23:11.924 "bdev_name": "nvme1n1" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd4", 00:23:11.924 "bdev_name": "nvme2n1" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd5", 00:23:11.924 "bdev_name": "nvme3n1" 00:23:11.924 } 00:23:11.924 ]' 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd0", 00:23:11.924 "bdev_name": "nvme0n1" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd1", 00:23:11.924 "bdev_name": "nvme0n2" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd2", 00:23:11.924 "bdev_name": "nvme0n3" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd3", 00:23:11.924 "bdev_name": "nvme1n1" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": "/dev/nbd4", 00:23:11.924 "bdev_name": "nvme2n1" 00:23:11.924 }, 00:23:11.924 { 00:23:11.924 "nbd_device": 
"/dev/nbd5", 00:23:11.924 "bdev_name": "nvme3n1" 00:23:11.924 } 00:23:11.924 ]' 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.924 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:11.925 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.925 13:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:12.197 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.466 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.723 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.981 13:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.240 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.498 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:13.757 13:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:14.323 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:23:14.582 /dev/nbd0 00:23:14.582 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:14.582 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:14.583 1+0 records in 00:23:14.583 1+0 records out 00:23:14.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407422 s, 10.1 MB/s 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:14.583 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:23:14.841 /dev/nbd1 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:14.841 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:14.841 1+0 records in 00:23:14.841 1+0 records out 00:23:14.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507631 s, 8.1 MB/s 00:23:15.100 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.100 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:15.100 13:43:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.100 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:15.100 13:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:15.100 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:15.100 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:15.100 13:43:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:23:15.360 /dev/nbd10 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:15.360 1+0 records in 00:23:15.360 1+0 records out 00:23:15.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658145 s, 6.2 MB/s 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:15.360 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:23:15.619 /dev/nbd11 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:15.619 13:43:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:15.619 1+0 records in 00:23:15.619 1+0 records out 00:23:15.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161561 s, 2.5 MB/s 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:15.619 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:23:15.877 /dev/nbd12 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:15.877 1+0 records in 00:23:15.877 1+0 records out 00:23:15.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606575 s, 6.8 MB/s 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:15.877 13:43:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:23:16.443 /dev/nbd13 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:16.443 1+0 records in 00:23:16.443 1+0 records out 00:23:16.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725721 s, 5.6 MB/s 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.443 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:16.444 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd0", 00:23:16.702 "bdev_name": "nvme0n1" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd1", 00:23:16.702 "bdev_name": "nvme0n2" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd10", 00:23:16.702 "bdev_name": "nvme0n3" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd11", 00:23:16.702 "bdev_name": "nvme1n1" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd12", 00:23:16.702 "bdev_name": "nvme2n1" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd13", 00:23:16.702 "bdev_name": "nvme3n1" 00:23:16.702 } 00:23:16.702 ]' 00:23:16.702 13:43:08 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd0", 00:23:16.702 "bdev_name": "nvme0n1" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd1", 00:23:16.702 "bdev_name": "nvme0n2" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd10", 00:23:16.702 "bdev_name": "nvme0n3" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd11", 00:23:16.702 "bdev_name": "nvme1n1" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd12", 00:23:16.702 "bdev_name": "nvme2n1" 00:23:16.702 }, 00:23:16.702 { 00:23:16.702 "nbd_device": "/dev/nbd13", 00:23:16.702 "bdev_name": "nvme3n1" 00:23:16.702 } 00:23:16.702 ]' 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:16.702 /dev/nbd1 00:23:16.702 /dev/nbd10 00:23:16.702 /dev/nbd11 00:23:16.702 /dev/nbd12 00:23:16.702 /dev/nbd13' 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:16.702 /dev/nbd1 00:23:16.702 /dev/nbd10 00:23:16.702 /dev/nbd11 00:23:16.702 /dev/nbd12 00:23:16.702 /dev/nbd13' 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:16.702 256+0 records in 00:23:16.702 256+0 records out 00:23:16.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775197 s, 135 MB/s 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:16.702 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:16.961 256+0 records in 00:23:16.961 256+0 records out 00:23:16.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122609 s, 8.6 MB/s 00:23:16.961 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:16.961 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:16.961 256+0 records in 00:23:16.961 256+0 records out 00:23:16.961 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.141012 s, 7.4 MB/s 00:23:16.961 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:16.961 13:43:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:17.219 256+0 records in 00:23:17.219 256+0 records out 00:23:17.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150437 s, 7.0 MB/s 00:23:17.219 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:17.219 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:17.219 256+0 records in 00:23:17.219 256+0 records out 00:23:17.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161807 s, 6.5 MB/s 00:23:17.219 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:17.219 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:17.476 256+0 records in 00:23:17.476 256+0 records out 00:23:17.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145679 s, 7.2 MB/s 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:17.476 256+0 records in 00:23:17.476 256+0 records out 00:23:17.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121517 s, 8.6 MB/s 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:17.476 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:17.735 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:17.994 13:43:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:18.253 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:18.511 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:18.771 13:43:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:19.338 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:19.596 13:43:11 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:19.596 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:19.854 13:43:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:20.420 malloc_lvol_verify 00:23:20.420 13:43:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:20.678 5ab8a81e-7d25-45a4-9e4e-bacf11bedc9b 00:23:20.678 13:43:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:20.936 bd46d7bf-6f35-48f0-9448-30306cf23e6e 00:23:20.936 13:43:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:21.195 /dev/nbd0 00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:23:21.195 mke2fs 1.47.0 (5-Feb-2023)
00:23:21.195 Discarding device blocks: 0/4096 done
00:23:21.195 Creating filesystem with 4096 1k blocks and 1024 inodes
00:23:21.195
00:23:21.195 Allocating group tables: 0/1 done
00:23:21.195 Writing inode tables: 0/1 done
00:23:21.195 Creating journal (1024 blocks): done
00:23:21.195 Writing superblocks and filesystem accounting information: 0/1 done
00:23:21.195
00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:21.195 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74203
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74203 ']'
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74203
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74203
00:23:21.453 killing process with pid 74203
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74203'
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74203
00:23:21.453 13:43:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74203
00:23:22.864 ************************************
00:23:22.864 END TEST bdev_nbd
00:23:22.864 ************************************
00:23:22.864 13:43:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:23:22.864
00:23:22.864 real 0m14.272s
00:23:22.864 user 0m20.691s
00:23:22.864 sys 0m4.472s
00:23:22.864 13:43:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:22.864
13:43:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:22.864 13:43:14 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:23:22.864 13:43:14 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:23:22.864 13:43:14 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:23:22.864 13:43:14 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:23:22.864 13:43:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.864 13:43:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.864 13:43:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:22.864 ************************************ 00:23:22.864 START TEST bdev_fio 00:23:22.864 ************************************ 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:22.864 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:22.864 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:22.865 ************************************ 00:23:22.865 START TEST bdev_fio_rw_verify 00:23:22.865 ************************************ 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:23:22.865 13:43:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:23:23.123 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:23:23.123 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:23:23.123 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:23:23.123 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:23:23.123 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:23:23.123 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:23:23.123 fio-3.35
00:23:23.123
00:23:23.123 Starting 6 threads
00:23:35.355
00:23:35.355 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74648: Wed Nov 20 13:43:26 2024
00:23:35.355 read: IOPS=25.3k, BW=99.0MiB/s (104MB/s)(990MiB/10001msec)
00:23:35.355 slat (usec): min=3, max=1693, avg= 7.39, stdev= 6.81
00:23:35.355 clat (usec): min=141, max=14345, avg=725.86, stdev=449.62
00:23:35.355 lat (usec): min=146, max=14355, avg=733.25, stdev=450.15
00:23:35.355 clat percentiles (usec):
00:23:35.355 | 50.000th=[ 709], 99.000th=[ 2343], 99.900th=[ 5604], 99.990th=[12911],
00:23:35.355 | 99.999th=[14353]
00:23:35.355 write: IOPS=25.6k, BW=100MiB/s (105MB/s)(1001MiB/10001msec); 0 zone resets
00:23:35.355 slat (usec): min=14, max=5289, avg=30.13, stdev=34.70
00:23:35.355 clat (usec): min=112, max=14468, avg=830.16, stdev=468.66
00:23:35.355 lat (usec): min=132, max=14505, avg=860.29, stdev=471.04
00:23:35.355 clat percentiles (usec):
00:23:35.355 | 50.000th=[ 807], 99.000th=[ 2704], 99.900th=[ 5342], 99.990th=[13435],
00:23:35.355 | 99.999th=[14353]
00:23:35.355 bw ( KiB/s): min=84192, max=131552, per=100.00%, avg=102672.16, stdev=1871.40, samples=114
00:23:35.355 iops : min=21048, max=32888, avg=25667.89, stdev=467.86, samples=114
00:23:35.355 lat (usec) : 250=2.11%, 500=16.53%, 750=30.31%, 1000=36.65%
00:23:35.355 lat (msec) : 2=13.11%, 4=0.90%, 10=0.37%, 20=0.02%
00:23:35.355 cpu : usr=61.45%, sys=25.84%, ctx=6514, majf=0, minf=22150
00:23:35.355 IO depths : 1=12.3%, 2=24.9%, 4=50.1%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:23:35.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:35.355 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:35.355 issued rwts: total=253444,256222,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:35.355 latency : target=0, window=0, percentile=100.00%, depth=8
00:23:35.355
00:23:35.355 Run status group 0 (all jobs):
00:23:35.355 READ: bw=99.0MiB/s (104MB/s), 99.0MiB/s-99.0MiB/s (104MB/s-104MB/s), io=990MiB (1038MB), run=10001-10001msec
00:23:35.355 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=1001MiB (1049MB), run=10001-10001msec
00:23:36.287 -----------------------------------------------------
00:23:36.287 Suppressions used:
00:23:36.287 count bytes template
00:23:36.287 6 48 /usr/src/fio/parse.c
00:23:36.287 2610 250560 /usr/src/fio/iolog.c
00:23:36.287 1 8 libtcmalloc_minimal.so
00:23:36.287 1 904 libcrypto.so
00:23:36.287 -----------------------------------------------------
00:23:36.287
00:23:36.287
00:23:36.287 real 0m13.580s
00:23:36.287 user 0m38.866s
00:23:36.287 sys 0m15.866s
00:23:36.287 13:43:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:36.287 ************************************
00:23:36.287 13:43:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:23:36.287 END TEST bdev_fio_rw_verify
00:23:36.287 ************************************
00:23:36.287 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:36.545 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7d1bcc4f-cb39-43d1-9c88-588289e30d1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d1bcc4f-cb39-43d1-9c88-588289e30d1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "282661a5-d87f-4ea8-b803-b673c84d8324"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "282661a5-d87f-4ea8-b803-b673c84d8324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "97297a9a-b7a8-4a8a-9633-26a7bca25b5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "97297a9a-b7a8-4a8a-9633-26a7bca25b5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e9b8b3e9-cdff-46f5-b291-4a90ec426a73"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e9b8b3e9-cdff-46f5-b291-4a90ec426a73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "86a859bb-349f-4140-8cc2-74f18690560f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "86a859bb-349f-4140-8cc2-74f18690560f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "be5f5041-89cf-4a11-af33-21bf4da36606"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "be5f5041-89cf-4a11-af33-21bf4da36606",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:36.546 /home/vagrant/spdk_repo/spdk 00:23:36.546 ************************************ 00:23:36.546 END TEST bdev_fio 00:23:36.546 ************************************ 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:36.546 00:23:36.546 real 0m13.766s 00:23:36.546 user 0m38.974s 00:23:36.546 sys 0m15.942s 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.546 13:43:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:36.546 13:43:28 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:36.546 13:43:28 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:36.546 13:43:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:36.546 13:43:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.546 13:43:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:36.546 ************************************ 00:23:36.546 START TEST bdev_verify 00:23:36.546 ************************************ 00:23:36.546 13:43:28 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:36.546 [2024-11-20 13:43:28.526094] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:23:36.546 [2024-11-20 13:43:28.526245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74822 ] 00:23:36.804 [2024-11-20 13:43:28.704933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:36.804 [2024-11-20 13:43:28.812259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.804 [2024-11-20 13:43:28.812259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.369 Running I/O for 5 seconds... 
00:23:39.734 22816.00 IOPS, 89.12 MiB/s [2024-11-20T13:43:32.707Z] 23120.00 IOPS, 90.31 MiB/s [2024-11-20T13:43:33.642Z] 22346.67 IOPS, 87.29 MiB/s [2024-11-20T13:43:34.580Z] 22048.00 IOPS, 86.12 MiB/s [2024-11-20T13:43:34.580Z] 21222.40 IOPS, 82.90 MiB/s
00:23:42.541 Latency(us)
00:23:42.541 [2024-11-20T13:43:34.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:42.541 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x0 length 0x80000
00:23:42.541 nvme0n1 : 5.04 1599.02 6.25 0.00 0.00 79897.73 13583.83 92941.96
00:23:42.541 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x80000 length 0x80000
00:23:42.541 nvme0n1 : 5.04 1472.93 5.75 0.00 0.00 86734.99 15013.70 111053.73
00:23:42.541 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x0 length 0x80000
00:23:42.541 nvme0n2 : 5.05 1597.81 6.24 0.00 0.00 79793.24 17158.52 79119.83
00:23:42.541 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x80000 length 0x80000
00:23:42.541 nvme0n2 : 5.04 1472.27 5.75 0.00 0.00 86594.85 21448.15 95801.72
00:23:42.541 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x0 length 0x80000
00:23:42.541 nvme0n3 : 5.07 1614.36 6.31 0.00 0.00 78815.93 12809.31 97231.59
00:23:42.541 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x80000 length 0x80000
00:23:42.541 nvme0n3 : 5.08 1486.59 5.81 0.00 0.00 85591.32 15609.48 86745.83
00:23:42.541 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x0 length 0x20000
00:23:42.541 nvme1n1 : 5.05 1596.21 6.24 0.00 0.00 79547.95 10783.65 90082.21
00:23:42.541 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x20000 length 0x20000
00:23:42.541 nvme1n1 : 5.05 1471.01 5.75 0.00 0.00 86314.07 14596.65 82456.20
00:23:42.541 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x0 length 0xbd0bd
00:23:42.541 nvme2n1 : 5.07 2867.82 11.20 0.00 0.00 44166.60 4140.68 78643.20
00:23:42.541 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:23:42.541 nvme2n1 : 5.08 2682.62 10.48 0.00 0.00 47133.51 4438.57 82932.83
00:23:42.541 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0x0 length 0xa0000
00:23:42.541 nvme3n1 : 5.07 1615.67 6.31 0.00 0.00 78298.98 4706.68 100567.97
00:23:42.541 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:42.541 Verification LBA range: start 0xa0000 length 0xa0000
00:23:42.541 nvme3n1 : 5.08 1487.95 5.81 0.00 0.00 84936.94 13464.67 106287.48
00:23:42.541 [2024-11-20T13:43:34.580Z] ===================================================================================================================
00:23:42.541 [2024-11-20T13:43:34.580Z] Total : 20964.26 81.89 0.00 0.00 72714.77 4140.68 111053.73
00:23:43.476
00:23:43.476 real 0m7.042s
00:23:43.476 user 0m10.973s
00:23:43.476 sys 0m1.895s
00:23:43.476 13:43:35 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.476 13:43:35 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:43.476 ************************************ 00:23:43.476 END TEST bdev_verify 00:23:43.476 ************************************ 00:23:43.476 13:43:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:43.476 13:43:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:43.734 13:43:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.734 13:43:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:43.734 ************************************ 00:23:43.734 START TEST bdev_verify_big_io 00:23:43.734 ************************************ 00:23:43.734 13:43:35 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:43.734 [2024-11-20 13:43:35.651977] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:23:43.734 [2024-11-20 13:43:35.652456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ] 00:23:43.992 [2024-11-20 13:43:35.856046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:44.250 [2024-11-20 13:43:36.065911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.250 [2024-11-20 13:43:36.065919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.816 Running I/O for 5 seconds... 
00:23:50.945 1256.00 IOPS, 78.50 MiB/s [2024-11-20T13:43:42.984Z] 3375.00 IOPS, 210.94 MiB/s
00:23:50.945 Latency(us)
00:23:50.945 [2024-11-20T13:43:42.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:50.945 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x0 length 0x8000
00:23:50.945 nvme0n1 : 5.99 128.20 8.01 0.00 0.00 982786.95 23950.43 945624.90
00:23:50.945 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x8000 length 0x8000
00:23:50.945 nvme0n1 : 5.97 112.57 7.04 0.00 0.00 1071303.66 166818.91 1853119.77
00:23:50.945 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x0 length 0x8000
00:23:50.945 nvme0n2 : 6.00 137.25 8.58 0.00 0.00 894707.54 31933.91 835047.80
00:23:50.945 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x8000 length 0x8000
00:23:50.945 nvme0n2 : 5.99 149.58 9.35 0.00 0.00 807438.36 66727.56 758787.72
00:23:50.945 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x0 length 0x8000
00:23:50.945 nvme0n3 : 6.01 122.52 7.66 0.00 0.00 978054.62 9889.98 1784485.70
00:23:50.945 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x8000 length 0x8000
00:23:50.945 nvme0n3 : 6.00 138.75 8.67 0.00 0.00 851751.74 20971.52 865551.83
00:23:50.945 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x0 length 0x2000
00:23:50.945 nvme1n1 : 6.01 135.77 8.49 0.00 0.00 855759.61 10843.23 1418437.35
00:23:50.945 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x2000 length 0x2000
00:23:50.945 nvme1n1 : 5.98 111.10 6.94 0.00 0.00 1027356.84 84839.33 2455574.34
00:23:50.945 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x0 length 0xbd0b
00:23:50.945 nvme2n1 : 6.01 106.44 6.65 0.00 0.00 1056268.64 7328.12 2806370.68
00:23:50.945 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0xbd0b length 0xbd0b
00:23:50.945 nvme2n1 : 6.00 109.25 6.83 0.00 0.00 1015181.50 7208.96 2592842.47
00:23:50.945 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0x0 length 0xa000
00:23:50.945 nvme3n1 : 6.02 130.30 8.14 0.00 0.00 835786.90 11915.64 1151527.10
00:23:50.945 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:50.945 Verification LBA range: start 0xa000 length 0xa000
00:23:50.945 nvme3n1 : 6.02 140.97 8.81 0.00 0.00 766044.23 6881.28 846486.81
00:23:50.945 [2024-11-20T13:43:42.984Z] ===================================================================================================================
00:23:50.945 [2024-11-20T13:43:42.985Z] Total : 1522.70 95.17 0.00 0.00 918476.80 6881.28 2806370.68
00:23:52.316
00:23:52.316 real 0m8.523s
00:23:52.316 user 0m15.347s
00:23:52.316 sys 0m0.550s
00:23:52.316 13:43:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:52.316 13:43:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:23:52.316 ************************************
00:23:52.316 END TEST bdev_verify_big_io
00:23:52.316 ************************************
00:23:52.316 13:43:44 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:52.316 13:43:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:23:52.316 13:43:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:52.316 13:43:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:23:52.316 ************************************
00:23:52.316 START TEST bdev_write_zeroes
00:23:52.316 ************************************
00:23:52.316 13:43:44 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:52.316 [2024-11-20 13:43:44.208681] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization...
00:23:52.316 [2024-11-20 13:43:44.209256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75031 ]
00:23:52.575 [2024-11-20 13:43:44.437970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:52.575 [2024-11-20 13:43:44.560480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:53.142 Running I/O for 1 seconds...
00:23:54.083 54208.00 IOPS, 211.75 MiB/s
00:23:54.083 Latency(us)
00:23:54.083 [2024-11-20T13:43:46.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:54.083 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:54.083 nvme0n1 : 1.03 8093.43 31.61 0.00 0.00 15797.48 7745.16 31933.91
00:23:54.083 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:54.083 nvme0n2 : 1.03 8082.80 31.57 0.00 0.00 15803.64 7804.74 31933.91
00:23:54.083 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:54.083 nvme0n3 : 1.03 8072.54 31.53 0.00 0.00 15808.82 7804.74 31933.91
00:23:54.083 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:54.083 nvme1n1 : 1.03 8062.45 31.49 0.00 0.00 15814.27 7804.74 31933.91
00:23:54.083 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:54.083 nvme2n1 : 1.03 13779.34 53.83 0.00 0.00 9240.51 4200.26 20971.52
00:23:54.083 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:54.083 nvme3n1 : 1.03 8051.28 31.45 0.00 0.00 15729.90 6106.76 32887.16
00:23:54.083 [2024-11-20T13:43:46.122Z] ===================================================================================================================
00:23:54.083 [2024-11-20T13:43:46.122Z] Total : 54141.86 211.49 0.00 0.00 14128.80 4200.26 32887.16
00:23:55.460
00:23:55.460 real 0m3.093s
00:23:55.460 user 0m2.305s
00:23:55.460 sys 0m0.603s
00:23:55.460 13:43:47 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:55.460 13:43:47 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:23:55.460 ************************************
00:23:55.460 END TEST 
bdev_write_zeroes 00:23:55.460 ************************************ 00:23:55.460 13:43:47 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:55.460 13:43:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:55.460 13:43:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.460 13:43:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:55.460 ************************************ 00:23:55.460 START TEST bdev_json_nonenclosed 00:23:55.460 ************************************ 00:23:55.460 13:43:47 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:55.460 [2024-11-20 13:43:47.313141] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:23:55.460 [2024-11-20 13:43:47.313530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75090 ] 00:23:55.460 [2024-11-20 13:43:47.488659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.719 [2024-11-20 13:43:47.594953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.719 [2024-11-20 13:43:47.595073] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:55.719 [2024-11-20 13:43:47.595102] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:55.719 [2024-11-20 13:43:47.595117] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:55.977 00:23:55.977 real 0m0.632s 00:23:55.977 user 0m0.414s 00:23:55.977 sys 0m0.112s 00:23:55.977 13:43:47 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.977 13:43:47 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:55.977 ************************************ 00:23:55.977 END TEST bdev_json_nonenclosed 00:23:55.977 ************************************ 00:23:55.977 13:43:47 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:55.977 13:43:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:55.977 13:43:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.977 13:43:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:55.977 ************************************ 00:23:55.977 START TEST bdev_json_nonarray 00:23:55.977 ************************************ 00:23:55.977 13:43:47 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:55.977 [2024-11-20 13:43:48.002624] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:23:55.977 [2024-11-20 13:43:48.003007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75117 ] 00:23:56.236 [2024-11-20 13:43:48.174928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.495 [2024-11-20 13:43:48.279677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.495 [2024-11-20 13:43:48.279807] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:23:56.495 [2024-11-20 13:43:48.279837] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:56.495 [2024-11-20 13:43:48.279851] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:56.753 00:23:56.753 real 0m0.644s 00:23:56.753 user 0m0.419s 00:23:56.753 sys 0m0.118s 00:23:56.753 ************************************ 00:23:56.753 END TEST bdev_json_nonarray 00:23:56.753 ************************************ 00:23:56.753 13:43:48 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.753 13:43:48 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:23:56.753 13:43:48 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:57.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:57.580 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:57.838 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:57.838 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:57.838 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:58.096 00:23:58.096 real 0m59.173s 00:23:58.096 user 1m44.027s 00:23:58.096 sys 0m26.593s 00:23:58.096 13:43:49 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.096 13:43:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:58.096 ************************************ 00:23:58.096 END TEST blockdev_xnvme 00:23:58.096 ************************************ 00:23:58.096 13:43:49 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:58.096 13:43:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.096 13:43:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.096 13:43:49 -- 
common/autotest_common.sh@10 -- # set +x 00:23:58.096 ************************************ 00:23:58.096 START TEST ublk 00:23:58.096 ************************************ 00:23:58.096 13:43:49 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:58.096 * Looking for test storage... 00:23:58.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:58.096 13:43:49 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:58.096 13:43:49 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:23:58.096 13:43:49 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:58.096 13:43:50 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:58.096 13:43:50 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.096 13:43:50 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.096 13:43:50 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.096 13:43:50 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.096 13:43:50 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.096 13:43:50 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.096 13:43:50 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.096 13:43:50 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.096 13:43:50 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.096 13:43:50 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.096 13:43:50 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.096 13:43:50 ublk -- scripts/common.sh@344 -- # case "$op" in 00:23:58.096 13:43:50 ublk -- scripts/common.sh@345 -- # : 1 00:23:58.096 13:43:50 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.096 13:43:50 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.096 13:43:50 ublk -- scripts/common.sh@365 -- # decimal 1 00:23:58.096 13:43:50 ublk -- scripts/common.sh@353 -- # local d=1 00:23:58.096 13:43:50 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.096 13:43:50 ublk -- scripts/common.sh@355 -- # echo 1 00:23:58.097 13:43:50 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.097 13:43:50 ublk -- scripts/common.sh@366 -- # decimal 2 00:23:58.097 13:43:50 ublk -- scripts/common.sh@353 -- # local d=2 00:23:58.097 13:43:50 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.097 13:43:50 ublk -- scripts/common.sh@355 -- # echo 2 00:23:58.097 13:43:50 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.097 13:43:50 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.097 13:43:50 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.097 13:43:50 ublk -- scripts/common.sh@368 -- # return 0 00:23:58.097 13:43:50 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.097 13:43:50 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:58.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.097 --rc genhtml_branch_coverage=1 00:23:58.097 --rc genhtml_function_coverage=1 00:23:58.097 --rc genhtml_legend=1 00:23:58.097 --rc geninfo_all_blocks=1 00:23:58.097 --rc geninfo_unexecuted_blocks=1 00:23:58.097 00:23:58.097 ' 00:23:58.097 13:43:50 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:58.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.097 --rc genhtml_branch_coverage=1 00:23:58.097 --rc genhtml_function_coverage=1 00:23:58.097 --rc genhtml_legend=1 00:23:58.097 --rc geninfo_all_blocks=1 00:23:58.097 --rc geninfo_unexecuted_blocks=1 00:23:58.097 00:23:58.097 ' 00:23:58.097 13:43:50 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:58.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.097 --rc genhtml_branch_coverage=1 00:23:58.097 --rc genhtml_function_coverage=1 00:23:58.097 --rc genhtml_legend=1 00:23:58.097 --rc geninfo_all_blocks=1 00:23:58.097 --rc geninfo_unexecuted_blocks=1 00:23:58.097 00:23:58.097 ' 00:23:58.097 13:43:50 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:58.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.097 --rc genhtml_branch_coverage=1 00:23:58.097 --rc genhtml_function_coverage=1 00:23:58.097 --rc genhtml_legend=1 00:23:58.097 --rc geninfo_all_blocks=1 00:23:58.097 --rc geninfo_unexecuted_blocks=1 00:23:58.097 00:23:58.097 ' 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:58.097 13:43:50 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:58.097 13:43:50 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:58.097 13:43:50 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:58.097 13:43:50 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:58.097 13:43:50 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:58.097 13:43:50 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:58.097 13:43:50 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:58.097 13:43:50 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:23:58.097 13:43:50 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:23:58.097 13:43:50 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:23:58.355 13:43:50 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:23:58.355 13:43:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.355 13:43:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.355 13:43:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:58.355 ************************************ 00:23:58.355 START TEST test_save_ublk_config 00:23:58.355 ************************************ 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:23:58.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75401 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75401 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75401 ']' 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.355 13:43:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:58.355 [2024-11-20 13:43:50.304697] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:23:58.355 [2024-11-20 13:43:50.304941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75401 ] 00:23:58.614 [2024-11-20 13:43:50.514227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.614 [2024-11-20 13:43:50.620144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:59.549 [2024-11-20 13:43:51.442902] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:59.549 [2024-11-20 13:43:51.444102] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:59.549 malloc0 00:23:59.549 [2024-11-20 13:43:51.523105] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:59.549 [2024-11-20 13:43:51.523251] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:59.549 [2024-11-20 13:43:51.523270] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:59.549 [2024-11-20 13:43:51.523280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:59.549 [2024-11-20 13:43:51.532059] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:59.549 [2024-11-20 13:43:51.532135] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:59.549 [2024-11-20 13:43:51.538944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:59.549 [2024-11-20 13:43:51.539106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:59.549 [2024-11-20 13:43:51.555921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:59.549 0 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.549 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:00.116 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.116 13:43:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:24:00.116 "subsystems": [ 00:24:00.116 { 00:24:00.116 "subsystem": "fsdev", 00:24:00.116 "config": [ 00:24:00.116 { 00:24:00.117 "method": "fsdev_set_opts", 00:24:00.117 "params": { 00:24:00.117 "fsdev_io_pool_size": 65535, 00:24:00.117 "fsdev_io_cache_size": 256 00:24:00.117 } 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "keyring", 00:24:00.117 "config": [] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "iobuf", 00:24:00.117 "config": [ 00:24:00.117 { 
00:24:00.117 "method": "iobuf_set_options", 00:24:00.117 "params": { 00:24:00.117 "small_pool_count": 8192, 00:24:00.117 "large_pool_count": 1024, 00:24:00.117 "small_bufsize": 8192, 00:24:00.117 "large_bufsize": 135168, 00:24:00.117 "enable_numa": false 00:24:00.117 } 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "sock", 00:24:00.117 "config": [ 00:24:00.117 { 00:24:00.117 "method": "sock_set_default_impl", 00:24:00.117 "params": { 00:24:00.117 "impl_name": "posix" 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "sock_impl_set_options", 00:24:00.117 "params": { 00:24:00.117 "impl_name": "ssl", 00:24:00.117 "recv_buf_size": 4096, 00:24:00.117 "send_buf_size": 4096, 00:24:00.117 "enable_recv_pipe": true, 00:24:00.117 "enable_quickack": false, 00:24:00.117 "enable_placement_id": 0, 00:24:00.117 "enable_zerocopy_send_server": true, 00:24:00.117 "enable_zerocopy_send_client": false, 00:24:00.117 "zerocopy_threshold": 0, 00:24:00.117 "tls_version": 0, 00:24:00.117 "enable_ktls": false 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "sock_impl_set_options", 00:24:00.117 "params": { 00:24:00.117 "impl_name": "posix", 00:24:00.117 "recv_buf_size": 2097152, 00:24:00.117 "send_buf_size": 2097152, 00:24:00.117 "enable_recv_pipe": true, 00:24:00.117 "enable_quickack": false, 00:24:00.117 "enable_placement_id": 0, 00:24:00.117 "enable_zerocopy_send_server": true, 00:24:00.117 "enable_zerocopy_send_client": false, 00:24:00.117 "zerocopy_threshold": 0, 00:24:00.117 "tls_version": 0, 00:24:00.117 "enable_ktls": false 00:24:00.117 } 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "vmd", 00:24:00.117 "config": [] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "accel", 00:24:00.117 "config": [ 00:24:00.117 { 00:24:00.117 "method": "accel_set_options", 00:24:00.117 "params": { 00:24:00.117 "small_cache_size": 128, 00:24:00.117 "large_cache_size": 16, 00:24:00.117 "task_count": 2048, 00:24:00.117 "sequence_count": 2048, 00:24:00.117 "buf_count": 2048 00:24:00.117 } 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "bdev", 00:24:00.117 "config": [ 00:24:00.117 { 00:24:00.117 "method": "bdev_set_options", 00:24:00.117 "params": { 00:24:00.117 "bdev_io_pool_size": 65535, 00:24:00.117 "bdev_io_cache_size": 256, 00:24:00.117 "bdev_auto_examine": true, 00:24:00.117 "iobuf_small_cache_size": 128, 00:24:00.117 "iobuf_large_cache_size": 16 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "bdev_raid_set_options", 00:24:00.117 "params": { 00:24:00.117 "process_window_size_kb": 1024, 00:24:00.117 "process_max_bandwidth_mb_sec": 0 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "bdev_iscsi_set_options", 00:24:00.117 "params": { 00:24:00.117 "timeout_sec": 30 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "bdev_nvme_set_options", 00:24:00.117 "params": { 00:24:00.117 "action_on_timeout": "none", 00:24:00.117 "timeout_us": 0, 00:24:00.117 "timeout_admin_us": 0, 00:24:00.117 "keep_alive_timeout_ms": 10000, 00:24:00.117 "arbitration_burst": 0, 00:24:00.117 "low_priority_weight": 0, 00:24:00.117 "medium_priority_weight": 0, 00:24:00.117 "high_priority_weight": 0, 00:24:00.117 "nvme_adminq_poll_period_us": 10000, 00:24:00.117 "nvme_ioq_poll_period_us": 0, 00:24:00.117 "io_queue_requests": 0, 00:24:00.117 "delay_cmd_submit": true, 00:24:00.117 "transport_retry_count": 4, 00:24:00.117 
"bdev_retry_count": 3, 00:24:00.117 "transport_ack_timeout": 0, 00:24:00.117 "ctrlr_loss_timeout_sec": 0, 00:24:00.117 "reconnect_delay_sec": 0, 00:24:00.117 "fast_io_fail_timeout_sec": 0, 00:24:00.117 "disable_auto_failback": false, 00:24:00.117 "generate_uuids": false, 00:24:00.117 "transport_tos": 0, 00:24:00.117 "nvme_error_stat": false, 00:24:00.117 "rdma_srq_size": 0, 00:24:00.117 "io_path_stat": false, 00:24:00.117 "allow_accel_sequence": false, 00:24:00.117 "rdma_max_cq_size": 0, 00:24:00.117 "rdma_cm_event_timeout_ms": 0, 00:24:00.117 "dhchap_digests": [ 00:24:00.117 "sha256", 00:24:00.117 "sha384", 00:24:00.117 "sha512" 00:24:00.117 ], 00:24:00.117 "dhchap_dhgroups": [ 00:24:00.117 "null", 00:24:00.117 "ffdhe2048", 00:24:00.117 "ffdhe3072", 00:24:00.117 "ffdhe4096", 00:24:00.117 "ffdhe6144", 00:24:00.117 "ffdhe8192" 00:24:00.117 ] 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "bdev_nvme_set_hotplug", 00:24:00.117 "params": { 00:24:00.117 "period_us": 100000, 00:24:00.117 "enable": false 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "bdev_malloc_create", 00:24:00.117 "params": { 00:24:00.117 "name": "malloc0", 00:24:00.117 "num_blocks": 8192, 00:24:00.117 "block_size": 4096, 00:24:00.117 "physical_block_size": 4096, 00:24:00.117 "uuid": "3a52134c-02fe-4233-9466-a0e9c12452f9", 00:24:00.117 "optimal_io_boundary": 0, 00:24:00.117 "md_size": 0, 00:24:00.117 "dif_type": 0, 00:24:00.117 "dif_is_head_of_md": false, 00:24:00.117 "dif_pi_format": 0 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "bdev_wait_for_examine" 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "scsi", 00:24:00.117 "config": null 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "scheduler", 00:24:00.117 "config": [ 00:24:00.117 { 00:24:00.117 "method": "framework_set_scheduler", 00:24:00.117 "params": { 00:24:00.117 "name": "static" 00:24:00.117 } 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "vhost_scsi", 00:24:00.117 "config": [] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "vhost_blk", 00:24:00.117 "config": [] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "ublk", 00:24:00.117 "config": [ 00:24:00.117 { 00:24:00.117 "method": "ublk_create_target", 00:24:00.117 "params": { 00:24:00.117 "cpumask": "1" 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "ublk_start_disk", 00:24:00.117 "params": { 00:24:00.117 "bdev_name": "malloc0", 00:24:00.117 "ublk_id": 0, 00:24:00.117 "num_queues": 1, 00:24:00.117 "queue_depth": 128 00:24:00.117 } 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "nbd", 00:24:00.117 "config": [] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "nvmf", 00:24:00.117 "config": [ 00:24:00.117 { 00:24:00.117 "method": "nvmf_set_config", 00:24:00.117 "params": { 00:24:00.117 "discovery_filter": "match_any", 00:24:00.117 "admin_cmd_passthru": { 00:24:00.117 "identify_ctrlr": false 00:24:00.117 }, 00:24:00.117 "dhchap_digests": [ 00:24:00.117 "sha256", 00:24:00.117 "sha384", 00:24:00.117 "sha512" 00:24:00.117 ], 00:24:00.117 "dhchap_dhgroups": [ 00:24:00.117 "null", 00:24:00.117 "ffdhe2048", 00:24:00.117 "ffdhe3072", 00:24:00.117 "ffdhe4096", 00:24:00.117 "ffdhe6144", 00:24:00.117 "ffdhe8192" 00:24:00.117 ] 00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "nvmf_set_max_subsystems", 00:24:00.117 "params": { 00:24:00.117 "max_subsystems": 1024 
00:24:00.117 } 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "method": "nvmf_set_crdt", 00:24:00.117 "params": { 00:24:00.117 "crdt1": 0, 00:24:00.117 "crdt2": 0, 00:24:00.117 "crdt3": 0 00:24:00.117 } 00:24:00.117 } 00:24:00.117 ] 00:24:00.117 }, 00:24:00.117 { 00:24:00.117 "subsystem": "iscsi", 00:24:00.117 "config": [ 00:24:00.117 { 00:24:00.117 "method": "iscsi_set_options", 00:24:00.117 "params": { 00:24:00.117 "node_base": "iqn.2016-06.io.spdk", 00:24:00.117 "max_sessions": 128, 00:24:00.117 "max_connections_per_session": 2, 00:24:00.117 "max_queue_depth": 64, 00:24:00.117 "default_time2wait": 2, 00:24:00.117 "default_time2retain": 20, 00:24:00.117 "first_burst_length": 8192, 00:24:00.117 "immediate_data": true, 00:24:00.117 "allow_duplicated_isid": false, 00:24:00.117 "error_recovery_level": 0, 00:24:00.117 "nop_timeout": 60, 00:24:00.117 "nop_in_interval": 30, 00:24:00.117 "disable_chap": false, 00:24:00.118 "require_chap": false, 00:24:00.118 "mutual_chap": false, 00:24:00.118 "chap_group": 0, 00:24:00.118 "max_large_datain_per_connection": 64, 00:24:00.118 "max_r2t_per_connection": 4, 00:24:00.118 "pdu_pool_size": 36864, 00:24:00.118 "immediate_data_pool_size": 16384, 00:24:00.118 "data_out_pool_size": 2048 00:24:00.118 } 00:24:00.118 } 00:24:00.118 ] 00:24:00.118 } 00:24:00.118 ] 00:24:00.118 }' 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75401 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75401 ']' 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75401 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75401 00:24:00.118 killing process with pid 75401 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75401' 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75401 00:24:00.118 13:43:51 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75401 00:24:02.019 [2024-11-20 13:43:53.563092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:02.019 [2024-11-20 13:43:53.606951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:02.019 [2024-11-20 13:43:53.607176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:02.019 [2024-11-20 13:43:53.617110] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:02.019 [2024-11-20 13:43:53.617269] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:02.019 [2024-11-20 13:43:53.617304] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:02.019 [2024-11-20 13:43:53.617358] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:02.019 [2024-11-20 13:43:53.617622] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:03.395 13:43:55 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75469 00:24:03.395 13:43:55 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75469 00:24:03.395 13:43:55 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75469 ']' 00:24:03.395 13:43:55 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:24:03.395 13:43:55 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.395 13:43:55 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:24:03.395 "subsystems": [ 00:24:03.395 { 00:24:03.395 "subsystem": "fsdev", 00:24:03.395 "config": [ 00:24:03.395 { 00:24:03.395 "method": "fsdev_set_opts", 00:24:03.395 "params": { 00:24:03.395 "fsdev_io_pool_size": 65535, 00:24:03.395 "fsdev_io_cache_size": 256 00:24:03.395 } 00:24:03.395 } 00:24:03.395 ] 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "keyring", 00:24:03.395 "config": [] 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "iobuf", 00:24:03.395 "config": [ 00:24:03.395 { 00:24:03.395 "method": "iobuf_set_options", 00:24:03.395 "params": { 00:24:03.395 "small_pool_count": 8192, 00:24:03.395 "large_pool_count": 1024, 00:24:03.395 "small_bufsize": 8192, 00:24:03.395 "large_bufsize": 135168, 00:24:03.395 "enable_numa": false 00:24:03.395 } 00:24:03.395 } 00:24:03.395 ] 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "sock", 00:24:03.395 "config": [ 00:24:03.395 { 00:24:03.395 "method": "sock_set_default_impl", 00:24:03.395 "params": { 00:24:03.395 "impl_name": "posix" 00:24:03.395 } 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "method": "sock_impl_set_options", 00:24:03.395 "params": { 00:24:03.395 "impl_name": "ssl", 00:24:03.395 "recv_buf_size": 4096, 00:24:03.395 "send_buf_size": 4096, 00:24:03.395 "enable_recv_pipe": true, 00:24:03.395 "enable_quickack": false, 00:24:03.395 "enable_placement_id": 0, 00:24:03.395 "enable_zerocopy_send_server": true, 00:24:03.395 "enable_zerocopy_send_client": false, 00:24:03.395 "zerocopy_threshold": 0, 00:24:03.395 "tls_version": 0, 00:24:03.395 "enable_ktls": false 00:24:03.395 } 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "method": "sock_impl_set_options", 00:24:03.395 "params": { 00:24:03.395 "impl_name": "posix", 00:24:03.395 "recv_buf_size": 2097152, 00:24:03.395 "send_buf_size": 2097152, 00:24:03.395 "enable_recv_pipe": true, 00:24:03.395 "enable_quickack": false, 00:24:03.395 "enable_placement_id": 0, 00:24:03.395 "enable_zerocopy_send_server": true, 00:24:03.395 "enable_zerocopy_send_client": false, 00:24:03.395 "zerocopy_threshold": 0, 00:24:03.395 "tls_version": 0, 00:24:03.395 "enable_ktls": false 00:24:03.395 } 00:24:03.395 } 00:24:03.395 ] 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "vmd", 00:24:03.395 "config": [] 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "accel", 00:24:03.395 "config": [ 00:24:03.395 { 00:24:03.395 "method": "accel_set_options", 00:24:03.395 "params": { 00:24:03.395 "small_cache_size": 128, 00:24:03.395 "large_cache_size": 16, 00:24:03.395 "task_count": 2048, 00:24:03.395 "sequence_count": 2048, 00:24:03.395 "buf_count": 2048 00:24:03.395 } 00:24:03.395 } 00:24:03.395 ] 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "bdev", 00:24:03.395 "config": [ 00:24:03.395 { 00:24:03.395 "method": "bdev_set_options", 00:24:03.395 "params": { 00:24:03.395 "bdev_io_pool_size": 65535, 00:24:03.395 "bdev_io_cache_size": 256, 00:24:03.395 "bdev_auto_examine": true, 00:24:03.395 "iobuf_small_cache_size": 128, 00:24:03.395 "iobuf_large_cache_size": 16 00:24:03.395 } 00:24:03.395 
}, 00:24:03.395 { 00:24:03.395 "method": "bdev_raid_set_options", 00:24:03.395 "params": { 00:24:03.395 "process_window_size_kb": 1024, 00:24:03.395 "process_max_bandwidth_mb_sec": 0 00:24:03.395 } 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "method": "bdev_iscsi_set_options", 00:24:03.395 "params": { 00:24:03.395 "timeout_sec": 30 00:24:03.395 } 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "method": "bdev_nvme_set_options", 00:24:03.395 "params": { 00:24:03.395 "action_on_timeout": "none", 00:24:03.395 "timeout_us": 0, 00:24:03.395 "timeout_admin_us": 0, 00:24:03.395 "keep_alive_timeout_ms": 10000, 00:24:03.395 "arbitration_burst": 0, 00:24:03.395 "low_priority_weight": 0, 00:24:03.395 "medium_priority_weight": 0, 00:24:03.395 "high_priority_weight": 0, 00:24:03.395 "nvme_adminq_poll_period_us": 10000, 00:24:03.395 "nvme_ioq_poll_period_us": 0, 00:24:03.395 "io_queue_requests": 0, 00:24:03.395 "delay_cmd_submit": true, 00:24:03.395 "transport_retry_count": 4, 00:24:03.395 "bdev_retry_count": 3, 00:24:03.395 "transport_ack_timeout": 0, 00:24:03.395 "ctrlr_loss_timeout_sec": 0, 00:24:03.395 "reconnect_delay_sec": 0, 00:24:03.395 "fast_io_fail_timeout_sec": 0, 00:24:03.395 "disable_auto_failback": false, 00:24:03.395 "generate_uuids": false, 00:24:03.395 "transport_tos": 0, 00:24:03.395 "nvme_error_stat": false, 00:24:03.395 "rdma_srq_size": 0, 00:24:03.395 "io_path_stat": false, 00:24:03.395 "allow_accel_sequence": false, 00:24:03.395 "rdma_max_cq_size": 0, 00:24:03.395 "rdma_cm_event_timeout_ms": 0, 00:24:03.395 "dhchap_digests": [ 00:24:03.395 "sha256", 00:24:03.395 "sha384", 00:24:03.395 "sha512" 00:24:03.395 ], 00:24:03.395 "dhchap_dhgroups": [ 00:24:03.395 "null", 00:24:03.395 "ffdhe2048", 00:24:03.395 "ffdhe3072", 00:24:03.395 "ffdhe4096", 00:24:03.395 "ffdhe6144", 00:24:03.395 "ffdhe8192" 00:24:03.395 ] 00:24:03.395 } 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "method": "bdev_nvme_set_hotplug", 00:24:03.395 "params": { 00:24:03.395 "period_us": 100000, 00:24:03.395 "enable": false 00:24:03.395 } 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "method": "bdev_malloc_create", 00:24:03.395 "params": { 00:24:03.395 "name": "malloc0", 00:24:03.395 "num_blocks": 8192, 00:24:03.395 "block_size": 4096, 00:24:03.395 "physical_block_size": 4096, 00:24:03.395 "uuid": "3a52134c-02fe-4233-9466-a0e9c12452f9", 00:24:03.395 "optimal_io_boundary": 0, 00:24:03.395 "md_size": 0, 00:24:03.395 "dif_type": 0, 00:24:03.395 "dif_is_head_of_md": false, 00:24:03.395 "dif_pi_format": 0 00:24:03.395 } 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "method": "bdev_wait_for_examine" 00:24:03.395 } 00:24:03.395 ] 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "scsi", 00:24:03.395 "config": null 00:24:03.395 }, 00:24:03.395 { 00:24:03.395 "subsystem": "scheduler", 00:24:03.396 "config": [ 00:24:03.396 { 00:24:03.396 "method": "framework_set_scheduler", 00:24:03.396 "params": { 00:24:03.396 "name": "static" 00:24:03.396 } 00:24:03.396 } 00:24:03.396 ] 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "subsystem": "vhost_scsi", 00:24:03.396 "config": [] 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "subsystem": "vhost_blk", 00:24:03.396 "config": [] 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "subsystem": "ublk", 00:24:03.396 "config": [ 00:24:03.396 { 00:24:03.396 "method": "ublk_create_target", 00:24:03.396 "params": { 00:24:03.396 "cpumask": "1" 00:24:03.396 } 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "method": "ublk_start_disk", 00:24:03.396 "params": { 00:24:03.396 "bdev_name": "malloc0", 00:24:03.396 "ublk_id": 0, 
00:24:03.396 "num_queues": 1, 00:24:03.396 "queue_depth": 128 00:24:03.396 } 00:24:03.396 } 00:24:03.396 ] 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "subsystem": "nbd", 00:24:03.396 "config": [] 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "subsystem": "nvmf", 00:24:03.396 "config": [ 00:24:03.396 { 00:24:03.396 "method": "nvmf_set_config", 00:24:03.396 "params": { 00:24:03.396 "discovery_filter": "match_any", 00:24:03.396 "admin_cmd_passthru": { 00:24:03.396 "identify_ctrlr": false 00:24:03.396 }, 00:24:03.396 "dhchap_digests": [ 00:24:03.396 "sha256", 00:24:03.396 "sha384", 00:24:03.396 "sha512" 00:24:03.396 ], 00:24:03.396 "dhchap_dhgroups": [ 00:24:03.396 "null", 00:24:03.396 "ffdhe2048", 00:24:03.396 "ffdhe3072", 00:24:03.396 "ffdhe4096", 00:24:03.396 "ffdhe6144", 00:24:03.396 "ffdhe8192" 00:24:03.396 ] 00:24:03.396 } 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "method": "nvmf_set_max_subsystems", 00:24:03.396 "params": { 00:24:03.396 "max_subsystems": 1024 00:24:03.396 } 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "method": "nvmf_set_crdt", 00:24:03.396 "params": { 00:24:03.396 "crdt1": 0, 00:24:03.396 "crdt2": 0, 00:24:03.396 "crdt3": 0 00:24:03.396 } 00:24:03.396 } 00:24:03.396 ] 00:24:03.396 }, 00:24:03.396 { 00:24:03.396 "subsystem": "iscsi", 00:24:03.396 "config": [ 00:24:03.396 { 00:24:03.396 "method": "iscsi_set_options", 00:24:03.396 "params": { 00:24:03.396 "node_base": "iqn.2016-06.io.spdk", 00:24:03.396 "max_sessions": 128, 00:24:03.396 "max_connections_per_session": 2, 00:24:03.396 "max_queue_depth": 64, 00:24:03.396 "default_time2wait": 2, 00:24:03.396 "default_time2retain": 20, 00:24:03.396 "first_burst_length": 8192, 00:24:03.396 "immediate_data": true, 00:24:03.396 "allow_duplicated_isid": false, 00:24:03.396 "error_recovery_level": 0, 00:24:03.396 "nop_timeout": 60, 00:24:03.396 "nop_in_interval": 30, 00:24:03.396 "disable_chap": false, 00:24:03.396 "require_chap": false, 00:24:03.396 "mutual_chap": false, 00:24:03.396 "chap_group": 0, 00:24:03.396 "max_large_datain_per_connection": 64, 00:24:03.396 "max_r2t_per_connection": 4, 00:24:03.396 "pdu_pool_size": 36864, 00:24:03.396 "immediate_data_pool_size": 16384, 00:24:03.396 "data_out_pool_size": 2048 00:24:03.396 } 00:24:03.396 } 00:24:03.396 ] 00:24:03.396 } 00:24:03.396 ] 00:24:03.396 }' 00:24:03.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.396 13:43:55 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.396 13:43:55 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.396 13:43:55 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.396 13:43:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:03.654 [2024-11-20 13:43:55.483343] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:24:03.654 [2024-11-20 13:43:55.483524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75469 ] 00:24:03.654 [2024-11-20 13:43:55.664661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.913 [2024-11-20 13:43:55.767926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.848 [2024-11-20 13:43:56.691893] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:04.848 [2024-11-20 13:43:56.693400] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:04.848 [2024-11-20 13:43:56.700045] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:24:04.848 [2024-11-20 13:43:56.700160] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:24:04.848 [2024-11-20 13:43:56.700180] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:04.848 [2024-11-20 13:43:56.700190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:04.848 [2024-11-20 13:43:56.707914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:04.848 [2024-11-20 13:43:56.707947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:04.848 [2024-11-20 13:43:56.715926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:04.848 [2024-11-20 13:43:56.716060] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:04.848 [2024-11-20 13:43:56.732907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75469 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75469 ']' 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75469 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75469 00:24:04.848 killing process with pid 75469 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.848 
13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75469' 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75469 00:24:04.848 13:43:56 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75469 00:24:06.750 [2024-11-20 13:43:58.262695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:06.750 [2024-11-20 13:43:58.294000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:06.750 [2024-11-20 13:43:58.294171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:06.750 [2024-11-20 13:43:58.300922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:06.750 [2024-11-20 13:43:58.300988] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:06.750 [2024-11-20 13:43:58.301001] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:06.750 [2024-11-20 13:43:58.301038] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:06.750 [2024-11-20 13:43:58.301218] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:08.127 13:44:00 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:24:08.127 00:24:08.127 real 0m9.909s 00:24:08.127 user 0m7.822s 00:24:08.127 sys 0m3.316s 00:24:08.127 13:44:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.127 13:44:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:08.127 ************************************ 00:24:08.127 END TEST test_save_ublk_config 00:24:08.127 ************************************ 00:24:08.127 13:44:00 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75555 00:24:08.127 13:44:00 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:08.127 13:44:00 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.127 13:44:00 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75555 00:24:08.127 13:44:00 ublk -- common/autotest_common.sh@835 -- # '[' -z 75555 ']' 00:24:08.127 13:44:00 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.127 13:44:00 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.127 13:44:00 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.127 13:44:00 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.127 13:44:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:08.385 [2024-11-20 13:44:00.249993] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:24:08.385 [2024-11-20 13:44:00.250369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75555 ] 00:24:08.644 [2024-11-20 13:44:00.427152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:08.644 [2024-11-20 13:44:00.532845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.644 [2024-11-20 13:44:00.532852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.578 13:44:01 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.578 13:44:01 ublk -- common/autotest_common.sh@868 -- # return 0 00:24:09.578 13:44:01 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:24:09.578 13:44:01 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:09.578 13:44:01 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.578 13:44:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:09.578 ************************************ 00:24:09.578 START TEST test_create_ublk 00:24:09.578 ************************************ 00:24:09.578 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:24:09.578 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:24:09.578 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.578 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:09.578 [2024-11-20 13:44:01.420920] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:09.578 [2024-11-20 13:44:01.424462] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:09.578 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.578 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:24:09.578 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:24:09.578 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.578 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:09.836 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:09.836 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.836 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:09.836 [2024-11-20 13:44:01.717136] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:09.836 [2024-11-20 13:44:01.717660] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:09.836 [2024-11-20 13:44:01.717690] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:09.836 [2024-11-20 13:44:01.717702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:09.836 [2024-11-20 13:44:01.726118] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:09.836 [2024-11-20 13:44:01.726153] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:09.836 
[2024-11-20 13:44:01.732921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:09.836 [2024-11-20 13:44:01.748018] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:09.836 [2024-11-20 13:44:01.770932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:09.836 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:24:09.836 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.836 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:09.836 13:44:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:24:09.836 { 00:24:09.836 "ublk_device": "/dev/ublkb0", 00:24:09.836 "id": 0, 00:24:09.836 "queue_depth": 512, 00:24:09.836 "num_queues": 4, 00:24:09.836 "bdev_name": "Malloc0" 00:24:09.836 } 00:24:09.836 ]' 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:09.836 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:24:10.093 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:24:10.093 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:24:10.093 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:24:10.093 13:44:01 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:24:10.093 13:44:02 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:24:10.093 13:44:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:24:10.093 13:44:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:10.093 13:44:02 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:24:10.093 13:44:02 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:24:10.351 fio: verification read phase will never start because write phase uses all of runtime 00:24:10.351 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:24:10.351 fio-3.35 00:24:10.351 Starting 1 process 00:24:20.325 00:24:20.325 fio_test: (groupid=0, jobs=1): err= 0: pid=75607: Wed Nov 20 13:44:12 2024 00:24:20.325 write: IOPS=9875, BW=38.6MiB/s (40.4MB/s)(386MiB/10001msec); 0 zone resets 00:24:20.325 clat (usec): min=62, max=12192, avg=99.50, stdev=199.13 00:24:20.325 lat (usec): min=63, max=12194, avg=100.44, stdev=199.17 00:24:20.325 clat percentiles (usec): 00:24:20.325 | 1.00th=[ 77], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 81], 00:24:20.325 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:24:20.325 | 70.00th=[ 88], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 109], 00:24:20.325 | 99.00th=[ 137], 99.50th=[ 355], 99.90th=[ 3589], 99.95th=[ 3818], 00:24:20.325 | 99.99th=[ 4293] 00:24:20.325 bw ( KiB/s): min=17096, max=43112, per=99.67%, avg=39373.47, stdev=7019.38, samples=19 00:24:20.325 iops : min= 4274, max=10778, avg=9843.37, stdev=1754.85, samples=19 00:24:20.325 lat (usec) : 100=91.02%, 250=8.47%, 500=0.02%, 750=0.02%, 1000=0.04% 00:24:20.325 lat (msec) : 2=0.12%, 4=0.28%, 10=0.03%, 20=0.01% 00:24:20.325 cpu : usr=3.60%, sys=8.24%, ctx=98765, majf=0, minf=796 00:24:20.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.325 issued rwts: total=0,98764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:20.325 00:24:20.325 Run status group 0 (all jobs): 00:24:20.325 WRITE: bw=38.6MiB/s (40.4MB/s), 38.6MiB/s-38.6MiB/s (40.4MB/s-40.4MB/s), io=386MiB (405MB), run=10001-10001msec 00:24:20.325 00:24:20.325 Disk stats (read/write): 00:24:20.325 ublkb0: ios=0/97656, merge=0/0, ticks=0/8786, in_queue=8786, util=99.09% 00:24:20.325 13:44:12 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:24:20.325 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.325 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:20.325 [2024-11-20 13:44:12.322579] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:20.584 [2024-11-20 13:44:12.364297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:20.584 [2024-11-20 13:44:12.365480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:20.584 [2024-11-20 13:44:12.373924] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:20.584 [2024-11-20 13:44:12.374267] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:20.584 [2024-11-20 13:44:12.374296] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.584 13:44:12 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:24:20.584 
13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:20.584 [2024-11-20 13:44:12.390014] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:24:20.584 request: 00:24:20.584 { 00:24:20.584 "ublk_id": 0, 00:24:20.584 "method": "ublk_stop_disk", 00:24:20.584 "req_id": 1 00:24:20.584 } 00:24:20.584 Got JSON-RPC error response 00:24:20.584 response: 00:24:20.584 { 00:24:20.584 "code": -19, 00:24:20.584 "message": "No such device" 00:24:20.584 } 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:20.584 13:44:12 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:20.584 [2024-11-20 13:44:12.406009] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:20.584 [2024-11-20 13:44:12.413889] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:20.584 [2024-11-20 13:44:12.413947] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.584 13:44:12 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.584 13:44:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.152 13:44:13 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@26 -- # 
'[' 0 == 0 ']' 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:24:21.152 ************************************ 00:24:21.152 END TEST test_create_ublk 00:24:21.152 ************************************ 00:24:21.152 13:44:13 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:21.152 00:24:21.152 real 0m11.747s 00:24:21.152 user 0m0.833s 00:24:21.152 sys 0m0.926s 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.152 13:44:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.410 13:44:13 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:24:21.410 13:44:13 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:21.410 13:44:13 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.410 13:44:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.410 ************************************ 00:24:21.410 START TEST test_create_multi_ublk 00:24:21.410 ************************************ 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.410 [2024-11-20 13:44:13.219892] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:21.410 [2024-11-20 13:44:13.222199] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.410 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.669 [2024-11-20 13:44:13.509075] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:21.669 [2024-11-20 
13:44:13.509565] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:21.669 [2024-11-20 13:44:13.509589] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:21.669 [2024-11-20 13:44:13.509606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:21.669 [2024-11-20 13:44:13.517136] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:21.669 [2024-11-20 13:44:13.517172] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:21.669 [2024-11-20 13:44:13.523916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:21.669 [2024-11-20 13:44:13.524646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:21.669 [2024-11-20 13:44:13.543904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.669 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.927 [2024-11-20 13:44:13.804143] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:24:21.927 [2024-11-20 13:44:13.804630] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:24:21.927 [2024-11-20 13:44:13.804659] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:21.927 [2024-11-20 13:44:13.804670] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:21.927 [2024-11-20 13:44:13.811929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:21.927 [2024-11-20 13:44:13.811961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:21.927 [2024-11-20 13:44:13.819935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:21.927 [2024-11-20 13:44:13.820692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:21.927 [2024-11-20 13:44:13.836894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.927 13:44:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:22.186 [2024-11-20 13:44:14.092055] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:24:22.186 [2024-11-20 13:44:14.092563] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:24:22.186 [2024-11-20 13:44:14.092588] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:24:22.186 [2024-11-20 13:44:14.092601] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:24:22.186 [2024-11-20 13:44:14.099936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:22.186 [2024-11-20 13:44:14.099975] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:22.186 [2024-11-20 13:44:14.107913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:22.186 [2024-11-20 13:44:14.108650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:24:22.186 [2024-11-20 13:44:14.112131] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.186 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:22.444 [2024-11-20 13:44:14.365065] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:24:22.444 [2024-11-20 13:44:14.365595] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:24:22.444 [2024-11-20 13:44:14.365616] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:24:22.444 [2024-11-20 13:44:14.365625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:24:22.444 [2024-11-20 13:44:14.368942] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:22.444 [2024-11-20 13:44:14.368973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:22.444 [2024-11-20 13:44:14.378894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:22.444 [2024-11-20 13:44:14.379632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:24:22.444 [2024-11-20 13:44:14.391153] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:24:22.444 { 00:24:22.444 "ublk_device": "/dev/ublkb0", 00:24:22.444 "id": 0, 00:24:22.444 "queue_depth": 512, 00:24:22.444 "num_queues": 4, 00:24:22.444 "bdev_name": "Malloc0" 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "ublk_device": "/dev/ublkb1", 00:24:22.444 "id": 1, 00:24:22.444 "queue_depth": 512, 00:24:22.444 "num_queues": 4, 00:24:22.444 "bdev_name": "Malloc1" 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "ublk_device": "/dev/ublkb2", 00:24:22.444 "id": 2, 00:24:22.444 "queue_depth": 512, 00:24:22.444 "num_queues": 4, 00:24:22.444 "bdev_name": "Malloc2" 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "ublk_device": "/dev/ublkb3", 00:24:22.444 "id": 3, 00:24:22.444 "queue_depth": 512, 00:24:22.444 "num_queues": 4, 00:24:22.444 "bdev_name": "Malloc3" 00:24:22.444 } 00:24:22.444 ]' 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:22.444 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:24:22.702 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:24:22.702 13:44:14 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:24:22.973 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:24:22.973 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:24:22.974 13:44:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:24:23.232 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.490 [2024-11-20 13:44:15.436106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:24:23.490 [2024-11-20 13:44:15.479366] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:23.490 [2024-11-20 13:44:15.480601] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:23.490 [2024-11-20 13:44:15.489934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:23.490 [2024-11-20 13:44:15.490401] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:23.490 [2024-11-20 13:44:15.493889] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.490 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.490 [2024-11-20 13:44:15.498079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:23.748 [2024-11-20 13:44:15.530367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:23.748 [2024-11-20 13:44:15.531547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:23.748 [2024-11-20 13:44:15.539925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:23.748 [2024-11-20 13:44:15.540268] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:23.748 [2024-11-20 13:44:15.540295] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:23.748 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.748 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:23.748 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:24:23.748 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.748 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.748 [2024-11-20 13:44:15.556075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:24:23.748 [2024-11-20 13:44:15.598949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:23.748 [2024-11-20 13:44:15.599935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:24:23.748 [2024-11-20 13:44:15.606927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:23.748 [2024-11-20 13:44:15.607283] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:24:23.748 [2024-11-20 13:44:15.607310] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:24:23.748 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.749 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:23.749 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:24:23.749 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.749 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.749 [2024-11-20 
13:44:15.623025] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:24:23.749 [2024-11-20 13:44:15.665357] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:23.749 [2024-11-20 13:44:15.666468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:24:23.749 [2024-11-20 13:44:15.670909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:23.749 [2024-11-20 13:44:15.671263] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:24:23.749 [2024-11-20 13:44:15.671290] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:24:23.749 13:44:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.749 13:44:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:24:24.007 [2024-11-20 13:44:15.977029] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:24.007 [2024-11-20 13:44:15.983919] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:24.007 [2024-11-20 13:44:15.983968] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:24.007 13:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:24:24.007 13:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.007 13:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:24.007 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.007 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.941 13:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.506 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.506 13:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:25.506 13:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:25.506 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.506 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
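For reference, the test_create_multi_ublk pass above reduces to a short RPC lifecycle: create a malloc bdev per device, expose each through ublk, verify the ublk_get_disks JSON with jq, then tear everything down. The sketch below is illustrative only; the RPC names, flags, sizes and the rpc.py path are taken from the trace, while the loop and the $rpc shorthand are editorial, not ublk.sh verbatim.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2 3; do                               # MAX_DEV_ID is 3 in this run
  $rpc bdev_malloc_create -b Malloc$i 128 4096     # 128 MiB backing bdev, 4 KiB blocks
  $rpc ublk_start_disk Malloc$i $i -q 4 -d 512     # exposes /dev/ublkb$i with 4 queues, depth 512
done
$rpc ublk_get_disks | jq -r '.[0].ublk_device'     # expect /dev/ublkb0, as checked above
for i in 0 1 2 3; do
  $rpc ublk_stop_disk $i                           # drives UBLK_CMD_STOP_DEV then UBLK_CMD_DEL_DEV
done
$rpc -t 120 ublk_destroy_target                    # generous timeout for target teardown
for i in 0 1 2 3; do
  $rpc bdev_malloc_delete Malloc$i
done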
00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:24:25.764 ************************************ 00:24:25.764 END TEST test_create_multi_ublk 00:24:25.764 ************************************ 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:25.764 00:24:25.764 real 0m4.507s 00:24:25.764 user 0m1.352s 00:24:25.764 sys 0m0.152s 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.764 13:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.764 13:44:17 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:25.764 13:44:17 ublk -- ublk/ublk.sh@147 -- # cleanup 00:24:25.764 13:44:17 ublk -- ublk/ublk.sh@130 -- # killprocess 75555 00:24:25.764 13:44:17 ublk -- common/autotest_common.sh@954 -- # '[' -z 75555 ']' 00:24:25.764 13:44:17 ublk -- common/autotest_common.sh@958 -- # kill -0 75555 00:24:25.764 13:44:17 ublk -- common/autotest_common.sh@959 -- # uname 00:24:25.764 13:44:17 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.765 13:44:17 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75555 00:24:25.765 killing process with pid 75555 00:24:25.765 13:44:17 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.765 13:44:17 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.765 13:44:17 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75555' 00:24:25.765 13:44:17 ublk -- common/autotest_common.sh@973 -- # kill 75555 00:24:25.765 13:44:17 ublk -- common/autotest_common.sh@978 -- # wait 75555 00:24:27.141 [2024-11-20 13:44:18.758632] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:27.141 [2024-11-20 13:44:18.758706] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:28.075 00:24:28.075 real 0m29.932s 00:24:28.075 user 0m43.767s 00:24:28.075 sys 0m10.103s 00:24:28.075 13:44:19 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.075 13:44:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:28.075 ************************************ 00:24:28.075 END TEST ublk 00:24:28.075 ************************************ 00:24:28.075 13:44:19 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:28.075 13:44:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:28.075 
13:44:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.075 13:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:28.075 ************************************ 00:24:28.075 START TEST ublk_recovery 00:24:28.075 ************************************ 00:24:28.075 13:44:19 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:28.075 * Looking for test storage... 00:24:28.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:28.075 13:44:19 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.075 13:44:19 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.075 13:44:19 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.075 13:44:20 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.075 13:44:20 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:24:28.075 13:44:20 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.075 13:44:20 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.075 --rc genhtml_branch_coverage=1 00:24:28.075 --rc genhtml_function_coverage=1 00:24:28.075 --rc genhtml_legend=1 00:24:28.075 --rc geninfo_all_blocks=1 00:24:28.075 --rc geninfo_unexecuted_blocks=1 00:24:28.075 00:24:28.075 ' 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.076 --rc genhtml_branch_coverage=1 00:24:28.076 --rc genhtml_function_coverage=1 00:24:28.076 --rc genhtml_legend=1 00:24:28.076 --rc geninfo_all_blocks=1 00:24:28.076 --rc geninfo_unexecuted_blocks=1 00:24:28.076 00:24:28.076 ' 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.076 --rc genhtml_branch_coverage=1 00:24:28.076 --rc genhtml_function_coverage=1 00:24:28.076 --rc genhtml_legend=1 00:24:28.076 --rc geninfo_all_blocks=1 00:24:28.076 --rc geninfo_unexecuted_blocks=1 00:24:28.076 00:24:28.076 ' 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.076 --rc genhtml_branch_coverage=1 00:24:28.076 --rc genhtml_function_coverage=1 00:24:28.076 --rc genhtml_legend=1 00:24:28.076 --rc geninfo_all_blocks=1 00:24:28.076 --rc geninfo_unexecuted_blocks=1 00:24:28.076 00:24:28.076 ' 00:24:28.076 13:44:20 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:28.076 13:44:20 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:24:28.076 13:44:20 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:24:28.076 13:44:20 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75964 00:24:28.076 13:44:20 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:28.076 13:44:20 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.076 13:44:20 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75964 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75964 ']' 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.076 13:44:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.353 [2024-11-20 13:44:20.208169] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:24:28.353 [2024-11-20 13:44:20.208718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75964 ] 00:24:28.646 [2024-11-20 13:44:20.384920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:28.646 [2024-11-20 13:44:20.492270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.646 [2024-11-20 13:44:20.492272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:29.581 13:44:21 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 [2024-11-20 13:44:21.263909] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:29.581 [2024-11-20 13:44:21.266823] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 13:44:21 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 malloc0 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 13:44:21 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 [2024-11-20 13:44:21.408137] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:24:29.581 [2024-11-20 13:44:21.408294] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:24:29.581 [2024-11-20 13:44:21.408315] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:29.581 [2024-11-20 13:44:21.408328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:29.581 [2024-11-20 13:44:21.416116] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:29.581 [2024-11-20 13:44:21.416162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:29.581 [2024-11-20 13:44:21.423922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:29.581 [2024-11-20 13:44:21.424130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:29.581 [2024-11-20 13:44:21.446938] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:29.581 1 00:24:29.581 13:44:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 13:44:21 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:24:30.517 13:44:22 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76005 00:24:30.517 13:44:22 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:24:30.517 13:44:22 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:24:30.776 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:30.776 fio-3.35 00:24:30.776 Starting 1 process 00:24:36.040 13:44:27 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75964 00:24:36.040 13:44:27 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:24:41.305 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75964 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:24:41.305 13:44:32 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76115 00:24:41.305 13:44:32 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:41.305 13:44:32 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:41.305 13:44:32 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76115 00:24:41.305 13:44:32 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76115 ']' 00:24:41.305 13:44:32 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.305 13:44:32 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.305 13:44:32 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.305 13:44:32 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.305 13:44:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.305 [2024-11-20 13:44:32.591131] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
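Context for the recovery pass that follows: the first target (pid 75964) was killed with SIGKILL while fio was mid-run against /dev/ublkb1, and the replacement spdk_tgt (pid 76115) starting here must re-attach the still-live kernel device rather than create a fresh one. A minimal sketch of the RPC sequence on the new target, mirroring the calls visible below (the $rpc shorthand is editorial; the names, sizes and the ublk id come from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target                      # fresh target; "User Copy enabled" is detected again
$rpc bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev (64 MiB, 4 KiB blocks)
$rpc ublk_recover_disk malloc0 1             # re-attach ublk id 1 instead of ublk_start_disk

As the debug lines below show, the recovery path retries UBLK_CMD_GET_DEV_INFO about once a second until the stale device can be taken over, then brackets the takeover between UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY ("Ublk 1 recover done successfully").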
00:24:41.305 [2024-11-20 13:44:32.591305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76115 ] 00:24:41.305 [2024-11-20 13:44:32.779905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:41.305 [2024-11-20 13:44:32.908437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.305 [2024-11-20 13:44:32.908447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.923 13:44:33 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.923 13:44:33 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:41.923 13:44:33 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:24:41.923 13:44:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.923 13:44:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.923 [2024-11-20 13:44:33.751910] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:41.924 [2024-11-20 13:44:33.754394] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:41.924 13:44:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.924 13:44:33 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:41.924 13:44:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.924 13:44:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.924 malloc0 00:24:41.924 13:44:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.924 13:44:33 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:24:41.924 13:44:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.924 13:44:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.924 [2024-11-20 13:44:33.896174] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:24:41.924 [2024-11-20 13:44:33.896233] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:41.924 [2024-11-20 13:44:33.896250] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:41.924 [2024-11-20 13:44:33.903944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:41.924 [2024-11-20 13:44:33.903988] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:41.924 1 00:24:41.924 13:44:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.924 13:44:33 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76005 00:24:43.299 [2024-11-20 13:44:34.904037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:43.299 [2024-11-20 13:44:34.910901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:43.299 [2024-11-20 13:44:34.910932] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:44.235 [2024-11-20 13:44:35.910975] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:44.235 [2024-11-20 13:44:35.919894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:44.235 [2024-11-20 13:44:35.919948] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1
00:24:45.171 [2024-11-20 13:44:36.919984] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:24:45.171 [2024-11-20 13:44:36.925909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:24:45.171 [2024-11-20 13:44:36.925947] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:24:45.171 [2024-11-20 13:44:36.925965] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:24:45.171 [2024-11-20 13:44:36.926098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:25:07.095 [2024-11-20 13:44:57.654927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:25:07.095 [2024-11-20 13:44:57.661303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:25:07.095 [2024-11-20 13:44:57.667181] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:25:07.095 [2024-11-20 13:44:57.667230] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:25:33.674
00:25:33.674 fio_test: (groupid=0, jobs=1): err= 0: pid=76008: Wed Nov 20 13:45:22 2024
00:25:33.674 read: IOPS=9479, BW=37.0MiB/s (38.8MB/s)(2222MiB/60002msec)
00:25:33.674 slat (nsec): min=1977, max=723165, avg=6886.27, stdev=3172.94
00:25:33.674 clat (usec): min=1109, max=30216k, avg=6688.71, stdev=320444.84
00:25:33.674 lat (usec): min=1127, max=30216k, avg=6695.60, stdev=320444.85
00:25:33.674 clat percentiles (msec):
00:25:33.674 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
00:25:33.674 | 30.00th=[ 3], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4],
00:25:33.674 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5],
00:25:33.674 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 14],
00:25:33.674 | 99.99th=[17113]
00:25:33.674 bw ( KiB/s): min= 2384, max=81488, per=100.00%, avg=74696.00, stdev=12726.86, samples=60
00:25:33.674 iops : min= 596, max=20372, avg=18673.95, stdev=3181.71, samples=60
00:25:33.674 write: IOPS=9465, BW=37.0MiB/s (38.8MB/s)(2219MiB/60002msec); 0 zone resets
00:25:33.674 slat (usec): min=2, max=1174, avg= 7.07, stdev= 3.27
00:25:33.674 clat (usec): min=1003, max=30216k, avg=6807.88, stdev=320683.83
00:25:33.674 lat (usec): min=1009, max=30216k, avg=6814.95, stdev=320683.83
00:25:33.674 clat percentiles (msec):
00:25:33.674 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 4],
00:25:33.674 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4],
00:25:33.674 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5],
00:25:33.674 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 14],
00:25:33.674 | 99.99th=[17113]
00:25:33.674 bw ( KiB/s): min= 2736, max=81368, per=100.00%, avg=74582.22, stdev=12638.75, samples=60
00:25:33.674 iops : min= 684, max=20342, avg=18645.50, stdev=3159.68, samples=60
00:25:33.674 lat (msec) : 2=0.04%, 4=90.62%, 10=9.26%, 20=0.07%, >=2000=0.01%
00:25:33.674 cpu : usr=6.00%, sys=12.78%, ctx=38049, majf=0, minf=14
00:25:33.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:25:33.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:33.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:33.674 issued rwts: total=568789,567945,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:33.674 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:33.674 Run status group 0 (all jobs):
00:25:33.674 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=2222MiB (2330MB), run=60002-60002msec
00:25:33.674 WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=2219MiB (2326MB), run=60002-60002msec
00:25:33.674
00:25:33.674 Disk stats (read/write):
00:25:33.674 ublkb1: ios=566563/565662, merge=0/0, ticks=3743475/3735554, in_queue=7479029, util=99.94%
00:25:33.674 13:45:22 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:25:33.674 [2024-11-20 13:45:22.726320] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:25:33.674 [2024-11-20 13:45:22.767030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:25:33.674 [2024-11-20 13:45:22.767478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:25:33.674 [2024-11-20 13:45:22.773930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:25:33.674 [2024-11-20 13:45:22.774089] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:25:33.674 [2024-11-20 13:45:22.774103] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:33.674 13:45:22 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:25:33.674 [2024-11-20 13:45:22.790093] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:25:33.674 [2024-11-20 13:45:22.797900] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:25:33.674 [2024-11-20 13:45:22.797973] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:33.674 13:45:22 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:25:33.674 13:45:22 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:25:33.674 13:45:22 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76115
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76115 ']'
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76115
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76115
00:25:33.674 killing process with pid 76115
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76115'
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76115
00:25:33.674 13:45:22 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76115
00:25:33.674 [2024-11-20 13:45:24.305068] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:25:33.674 [2024-11-20 13:45:24.305140] ublk.c: 
766:_ublk_fini_done: *DEBUG*: 00:25:33.674 00:25:33.674 real 1m5.642s 00:25:33.674 user 1m50.445s 00:25:33.674 sys 0m21.439s 00:25:33.674 13:45:25 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:33.674 13:45:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.674 ************************************ 00:25:33.674 END TEST ublk_recovery 00:25:33.674 ************************************ 00:25:33.674 13:45:25 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:25:33.674 13:45:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:33.675 13:45:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.675 13:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:33.675 13:45:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:25:33.675 13:45:25 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:33.675 13:45:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:33.675 13:45:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.675 13:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:33.675 ************************************ 00:25:33.675 START TEST ftl 00:25:33.675 ************************************ 00:25:33.675 13:45:25 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:33.935 * Looking for test storage... 00:25:33.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:33.935 13:45:25 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:33.935 13:45:25 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:33.935 13:45:25 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:25:33.935 13:45:25 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.935 13:45:25 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.935 13:45:25 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.935 13:45:25 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.935 13:45:25 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.935 13:45:25 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.935 13:45:25 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.935 13:45:25 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.935 13:45:25 ftl -- scripts/common.sh@344 -- # case "$op" in 00:25:33.935 13:45:25 ftl -- scripts/common.sh@345 -- # : 1 00:25:33.935 13:45:25 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.935 13:45:25 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:33.935 13:45:25 ftl -- scripts/common.sh@365 -- # decimal 1 00:25:33.935 13:45:25 ftl -- scripts/common.sh@353 -- # local d=1 00:25:33.935 13:45:25 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.935 13:45:25 ftl -- scripts/common.sh@355 -- # echo 1 00:25:33.935 13:45:25 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.935 13:45:25 ftl -- scripts/common.sh@366 -- # decimal 2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@353 -- # local d=2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.935 13:45:25 ftl -- scripts/common.sh@355 -- # echo 2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.935 13:45:25 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.935 13:45:25 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.936 13:45:25 ftl -- scripts/common.sh@368 -- # return 0 00:25:33.936 13:45:25 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.936 13:45:25 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.936 --rc genhtml_branch_coverage=1 00:25:33.936 --rc genhtml_function_coverage=1 00:25:33.936 --rc genhtml_legend=1 00:25:33.936 --rc geninfo_all_blocks=1 00:25:33.936 --rc geninfo_unexecuted_blocks=1 00:25:33.936 00:25:33.936 ' 00:25:33.936 13:45:25 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.936 --rc genhtml_branch_coverage=1 00:25:33.936 --rc genhtml_function_coverage=1 00:25:33.936 --rc genhtml_legend=1 00:25:33.936 --rc geninfo_all_blocks=1 00:25:33.936 --rc geninfo_unexecuted_blocks=1 00:25:33.936 00:25:33.936 ' 00:25:33.936 13:45:25 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.936 --rc genhtml_branch_coverage=1 00:25:33.936 --rc genhtml_function_coverage=1 00:25:33.936 --rc genhtml_legend=1 00:25:33.936 --rc geninfo_all_blocks=1 00:25:33.936 --rc geninfo_unexecuted_blocks=1 00:25:33.936 00:25:33.936 ' 00:25:33.936 13:45:25 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:33.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.936 --rc genhtml_branch_coverage=1 00:25:33.936 --rc genhtml_function_coverage=1 00:25:33.936 --rc genhtml_legend=1 00:25:33.936 --rc geninfo_all_blocks=1 00:25:33.936 --rc geninfo_unexecuted_blocks=1 00:25:33.936 00:25:33.936 ' 00:25:33.936 13:45:25 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:33.936 13:45:25 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:33.936 13:45:25 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:33.936 13:45:25 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:33.936 13:45:25 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:33.936 13:45:25 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:33.936 13:45:25 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.936 13:45:25 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:33.936 13:45:25 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:33.936 13:45:25 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.936 13:45:25 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.936 13:45:25 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:33.936 13:45:25 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:33.936 13:45:25 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:33.936 13:45:25 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:33.936 13:45:25 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:33.936 13:45:25 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:33.936 13:45:25 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.936 13:45:25 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.936 13:45:25 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:33.936 13:45:25 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:33.936 13:45:25 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:33.936 13:45:25 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:33.936 13:45:25 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:33.936 13:45:25 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:33.936 13:45:25 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:33.936 13:45:25 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:33.936 13:45:25 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.936 13:45:25 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.936 13:45:25 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.936 13:45:25 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:25:33.936 13:45:25 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:25:33.936 13:45:25 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:25:33.936 13:45:25 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:25:33.936 13:45:25 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:34.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:34.455 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:34.455 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:34.455 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:34.455 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:34.455 13:45:26 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76897 00:25:34.455 13:45:26 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:34.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
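The ftl.sh bring-up around this point uses SPDK's deferred-init pattern: spdk_tgt is launched with --wait-for-rpc so that bdev options can be set before subsystem initialization, and the generated NVMe config is streamed in through bash process substitution (which is what the /dev/fd/62 below refers to). A rough sketch, assuming the waitforlisten helper from autotest_common.sh; the $rpc shorthand is editorial:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
waitforlisten $!                    # wait for /var/tmp/spdk.sock to accept RPCs
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_set_options -d            # pre-init bdev option, passed exactly as ftl.sh does
$rpc framework_start_init           # now let the subsystems initialize
$rpc load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)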
00:25:34.455 13:45:26 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76897 00:25:34.455 13:45:26 ftl -- common/autotest_common.sh@835 -- # '[' -z 76897 ']' 00:25:34.455 13:45:26 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.455 13:45:26 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.455 13:45:26 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.455 13:45:26 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.455 13:45:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:34.715 [2024-11-20 13:45:26.525762] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:25:34.715 [2024-11-20 13:45:26.526014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76897 ] 00:25:34.715 [2024-11-20 13:45:26.702630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.973 [2024-11-20 13:45:26.805720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.590 13:45:27 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.590 13:45:27 ftl -- common/autotest_common.sh@868 -- # return 0 00:25:35.590 13:45:27 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:25:35.849 13:45:27 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:37.292 13:45:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:25:37.292 13:45:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:37.548 13:45:29 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:25:37.548 13:45:29 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:37.548 13:45:29 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@50 -- # break 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:38.115 13:45:29 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:38.373 13:45:30 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:25:38.373 13:45:30 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:25:38.373 13:45:30 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:25:38.373 13:45:30 ftl -- ftl/ftl.sh@63 -- # break 00:25:38.373 13:45:30 ftl -- ftl/ftl.sh@66 -- # killprocess 76897 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@954 -- # '[' -z 76897 ']' 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@958 -- # kill -0 76897 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@959 -- # uname 00:25:38.373 13:45:30 ftl -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76897 00:25:38.373 killing process with pid 76897 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76897' 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@973 -- # kill 76897 00:25:38.373 13:45:30 ftl -- common/autotest_common.sh@978 -- # wait 76897 00:25:40.276 13:45:32 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:25:40.276 13:45:32 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:40.276 13:45:32 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:40.276 13:45:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.276 13:45:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:40.276 ************************************ 00:25:40.276 START TEST ftl_fio_basic 00:25:40.276 ************************************ 00:25:40.276 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:40.536 * Looking for test storage... 00:25:40.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.536 --rc genhtml_branch_coverage=1 00:25:40.536 --rc genhtml_function_coverage=1 00:25:40.536 --rc genhtml_legend=1 00:25:40.536 --rc geninfo_all_blocks=1 00:25:40.536 --rc geninfo_unexecuted_blocks=1 00:25:40.536 00:25:40.536 ' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.536 --rc genhtml_branch_coverage=1 00:25:40.536 --rc genhtml_function_coverage=1 00:25:40.536 --rc genhtml_legend=1 00:25:40.536 --rc geninfo_all_blocks=1 00:25:40.536 --rc geninfo_unexecuted_blocks=1 00:25:40.536 00:25:40.536 ' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.536 --rc genhtml_branch_coverage=1 00:25:40.536 --rc genhtml_function_coverage=1 00:25:40.536 --rc genhtml_legend=1 00:25:40.536 --rc geninfo_all_blocks=1 00:25:40.536 --rc geninfo_unexecuted_blocks=1 00:25:40.536 00:25:40.536 ' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.536 --rc genhtml_branch_coverage=1 00:25:40.536 --rc genhtml_function_coverage=1 00:25:40.536 --rc genhtml_legend=1 00:25:40.536 --rc geninfo_all_blocks=1 00:25:40.536 --rc geninfo_unexecuted_blocks=1 00:25:40.536 00:25:40.536 ' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77041 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77041 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77041 ']' 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.536 13:45:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:40.796 [2024-11-20 13:45:32.603357] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:25:40.796 [2024-11-20 13:45:32.603743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77041 ] 00:25:40.796 [2024-11-20 13:45:32.789972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:41.055 [2024-11-20 13:45:32.897632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.055 [2024-11-20 13:45:32.897761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.055 [2024-11-20 13:45:32.897785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:25:41.992 13:45:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:42.250 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:42.509 { 00:25:42.509 "name": "nvme0n1", 00:25:42.509 "aliases": [ 00:25:42.509 "c120edc9-6c50-4d4f-9ddc-5a96d43cb6dd" 00:25:42.509 ], 00:25:42.509 "product_name": "NVMe disk", 00:25:42.509 "block_size": 4096, 00:25:42.509 "num_blocks": 1310720, 00:25:42.509 "uuid": "c120edc9-6c50-4d4f-9ddc-5a96d43cb6dd", 00:25:42.509 "numa_id": -1, 00:25:42.509 "assigned_rate_limits": { 00:25:42.509 "rw_ios_per_sec": 0, 00:25:42.509 "rw_mbytes_per_sec": 0, 00:25:42.509 "r_mbytes_per_sec": 0, 00:25:42.509 "w_mbytes_per_sec": 0 00:25:42.509 }, 00:25:42.509 "claimed": false, 00:25:42.509 "zoned": false, 00:25:42.509 "supported_io_types": { 00:25:42.509 "read": true, 00:25:42.509 "write": true, 00:25:42.509 "unmap": true, 00:25:42.509 "flush": true, 00:25:42.509 "reset": true, 00:25:42.509 "nvme_admin": true, 00:25:42.509 "nvme_io": true, 00:25:42.509 "nvme_io_md": false, 00:25:42.509 "write_zeroes": true, 00:25:42.509 "zcopy": false, 00:25:42.509 "get_zone_info": false, 00:25:42.509 "zone_management": false, 00:25:42.509 "zone_append": false, 00:25:42.509 "compare": true, 00:25:42.509 "compare_and_write": false, 00:25:42.509 "abort": true, 00:25:42.509 
"seek_hole": false, 00:25:42.509 "seek_data": false, 00:25:42.509 "copy": true, 00:25:42.509 "nvme_iov_md": false 00:25:42.509 }, 00:25:42.509 "driver_specific": { 00:25:42.509 "nvme": [ 00:25:42.509 { 00:25:42.509 "pci_address": "0000:00:11.0", 00:25:42.509 "trid": { 00:25:42.509 "trtype": "PCIe", 00:25:42.509 "traddr": "0000:00:11.0" 00:25:42.509 }, 00:25:42.509 "ctrlr_data": { 00:25:42.509 "cntlid": 0, 00:25:42.509 "vendor_id": "0x1b36", 00:25:42.509 "model_number": "QEMU NVMe Ctrl", 00:25:42.509 "serial_number": "12341", 00:25:42.509 "firmware_revision": "8.0.0", 00:25:42.509 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:42.509 "oacs": { 00:25:42.509 "security": 0, 00:25:42.509 "format": 1, 00:25:42.509 "firmware": 0, 00:25:42.509 "ns_manage": 1 00:25:42.509 }, 00:25:42.509 "multi_ctrlr": false, 00:25:42.509 "ana_reporting": false 00:25:42.509 }, 00:25:42.509 "vs": { 00:25:42.509 "nvme_version": "1.4" 00:25:42.509 }, 00:25:42.509 "ns_data": { 00:25:42.509 "id": 1, 00:25:42.509 "can_share": false 00:25:42.509 } 00:25:42.509 } 00:25:42.509 ], 00:25:42.509 "mp_policy": "active_passive" 00:25:42.509 } 00:25:42.509 } 00:25:42.509 ]' 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:42.509 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:43.076 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:25:43.076 13:45:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:43.336 13:45:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=28fde660-d4e4-4fb3-b506-47466cfb9d1b 00:25:43.336 13:45:35 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 28fde660-d4e4-4fb3-b506-47466cfb9d1b 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=48f75dee-cda0-4469-bc58-d8119ced020c 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 48f75dee-cda0-4469-bc58-d8119ced020c 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=48f75dee-cda0-4469-bc58-d8119ced020c 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 48f75dee-cda0-4469-bc58-d8119ced020c 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=48f75dee-cda0-4469-bc58-d8119ced020c 
00:25:43.595 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:43.595 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48f75dee-cda0-4469-bc58-d8119ced020c 00:25:44.162 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:44.162 { 00:25:44.162 "name": "48f75dee-cda0-4469-bc58-d8119ced020c", 00:25:44.162 "aliases": [ 00:25:44.162 "lvs/nvme0n1p0" 00:25:44.162 ], 00:25:44.162 "product_name": "Logical Volume", 00:25:44.162 "block_size": 4096, 00:25:44.162 "num_blocks": 26476544, 00:25:44.162 "uuid": "48f75dee-cda0-4469-bc58-d8119ced020c", 00:25:44.162 "assigned_rate_limits": { 00:25:44.162 "rw_ios_per_sec": 0, 00:25:44.162 "rw_mbytes_per_sec": 0, 00:25:44.162 "r_mbytes_per_sec": 0, 00:25:44.162 "w_mbytes_per_sec": 0 00:25:44.162 }, 00:25:44.162 "claimed": false, 00:25:44.162 "zoned": false, 00:25:44.162 "supported_io_types": { 00:25:44.162 "read": true, 00:25:44.162 "write": true, 00:25:44.162 "unmap": true, 00:25:44.162 "flush": false, 00:25:44.162 "reset": true, 00:25:44.162 "nvme_admin": false, 00:25:44.162 "nvme_io": false, 00:25:44.162 "nvme_io_md": false, 00:25:44.162 "write_zeroes": true, 00:25:44.162 "zcopy": false, 00:25:44.162 "get_zone_info": false, 00:25:44.162 "zone_management": false, 00:25:44.162 "zone_append": false, 00:25:44.162 "compare": false, 00:25:44.162 "compare_and_write": false, 00:25:44.162 "abort": false, 00:25:44.162 "seek_hole": true, 00:25:44.162 "seek_data": true, 00:25:44.162 "copy": false, 00:25:44.162 "nvme_iov_md": false 00:25:44.162 }, 00:25:44.162 "driver_specific": { 00:25:44.162 "lvol": { 00:25:44.162 "lvol_store_uuid": "28fde660-d4e4-4fb3-b506-47466cfb9d1b", 00:25:44.162 "base_bdev": "nvme0n1", 00:25:44.162 "thin_provision": true, 00:25:44.162 "num_allocated_clusters": 0, 00:25:44.162 "snapshot": false, 00:25:44.162 "clone": false, 00:25:44.162 "esnap_clone": false 00:25:44.162 } 00:25:44.162 } 00:25:44.162 } 00:25:44.162 ]' 00:25:44.162 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:44.162 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:44.162 13:45:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:44.162 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:44.162 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:44.162 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:44.162 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:25:44.162 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:25:44.162 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:44.422 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:44.422 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:44.422 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 48f75dee-cda0-4469-bc58-d8119ced020c 00:25:44.422 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=48f75dee-cda0-4469-bc58-d8119ced020c 00:25:44.422 13:45:36 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:44.422 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:44.422 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:44.422 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48f75dee-cda0-4469-bc58-d8119ced020c 00:25:44.681 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:44.681 { 00:25:44.681 "name": "48f75dee-cda0-4469-bc58-d8119ced020c", 00:25:44.681 "aliases": [ 00:25:44.681 "lvs/nvme0n1p0" 00:25:44.681 ], 00:25:44.681 "product_name": "Logical Volume", 00:25:44.681 "block_size": 4096, 00:25:44.681 "num_blocks": 26476544, 00:25:44.681 "uuid": "48f75dee-cda0-4469-bc58-d8119ced020c", 00:25:44.681 "assigned_rate_limits": { 00:25:44.681 "rw_ios_per_sec": 0, 00:25:44.681 "rw_mbytes_per_sec": 0, 00:25:44.681 "r_mbytes_per_sec": 0, 00:25:44.681 "w_mbytes_per_sec": 0 00:25:44.681 }, 00:25:44.681 "claimed": false, 00:25:44.681 "zoned": false, 00:25:44.681 "supported_io_types": { 00:25:44.681 "read": true, 00:25:44.681 "write": true, 00:25:44.681 "unmap": true, 00:25:44.681 "flush": false, 00:25:44.681 "reset": true, 00:25:44.682 "nvme_admin": false, 00:25:44.682 "nvme_io": false, 00:25:44.682 "nvme_io_md": false, 00:25:44.682 "write_zeroes": true, 00:25:44.682 "zcopy": false, 00:25:44.682 "get_zone_info": false, 00:25:44.682 "zone_management": false, 00:25:44.682 "zone_append": false, 00:25:44.682 "compare": false, 00:25:44.682 "compare_and_write": false, 00:25:44.682 "abort": false, 00:25:44.682 "seek_hole": true, 00:25:44.682 "seek_data": true, 00:25:44.682 "copy": false, 00:25:44.682 "nvme_iov_md": false 00:25:44.682 }, 00:25:44.682 "driver_specific": { 00:25:44.682 "lvol": { 00:25:44.682 "lvol_store_uuid": "28fde660-d4e4-4fb3-b506-47466cfb9d1b", 00:25:44.682 "base_bdev": "nvme0n1", 00:25:44.682 "thin_provision": true, 00:25:44.682 "num_allocated_clusters": 0, 00:25:44.682 "snapshot": false, 00:25:44.682 "clone": false, 00:25:44.682 "esnap_clone": false 00:25:44.682 } 00:25:44.682 } 00:25:44.682 } 00:25:44.682 ]' 00:25:44.682 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:44.682 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:44.682 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:44.941 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:44.941 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:44.941 13:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:44.941 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:25:44.941 13:45:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:25:45.200 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 48f75dee-cda0-4469-bc58-d8119ced020c 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=48f75dee-cda0-4469-bc58-d8119ced020c 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:45.200 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48f75dee-cda0-4469-bc58-d8119ced020c 00:25:45.461 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:45.461 { 00:25:45.461 "name": "48f75dee-cda0-4469-bc58-d8119ced020c", 00:25:45.461 "aliases": [ 00:25:45.461 "lvs/nvme0n1p0" 00:25:45.461 ], 00:25:45.461 "product_name": "Logical Volume", 00:25:45.461 "block_size": 4096, 00:25:45.461 "num_blocks": 26476544, 00:25:45.461 "uuid": "48f75dee-cda0-4469-bc58-d8119ced020c", 00:25:45.461 "assigned_rate_limits": { 00:25:45.461 "rw_ios_per_sec": 0, 00:25:45.461 "rw_mbytes_per_sec": 0, 00:25:45.461 "r_mbytes_per_sec": 0, 00:25:45.461 "w_mbytes_per_sec": 0 00:25:45.461 }, 00:25:45.461 "claimed": false, 00:25:45.461 "zoned": false, 00:25:45.461 "supported_io_types": { 00:25:45.461 "read": true, 00:25:45.461 "write": true, 00:25:45.461 "unmap": true, 00:25:45.461 "flush": false, 00:25:45.461 "reset": true, 00:25:45.461 "nvme_admin": false, 00:25:45.461 "nvme_io": false, 00:25:45.461 "nvme_io_md": false, 00:25:45.461 "write_zeroes": true, 00:25:45.461 "zcopy": false, 00:25:45.461 "get_zone_info": false, 00:25:45.461 "zone_management": false, 00:25:45.461 "zone_append": false, 00:25:45.461 "compare": false, 00:25:45.461 "compare_and_write": false, 00:25:45.461 "abort": false, 00:25:45.461 "seek_hole": true, 00:25:45.461 "seek_data": true, 00:25:45.461 "copy": false, 00:25:45.461 "nvme_iov_md": false 00:25:45.461 }, 00:25:45.461 "driver_specific": { 00:25:45.461 "lvol": { 00:25:45.461 "lvol_store_uuid": "28fde660-d4e4-4fb3-b506-47466cfb9d1b", 00:25:45.461 "base_bdev": "nvme0n1", 00:25:45.461 "thin_provision": true, 00:25:45.461 "num_allocated_clusters": 0, 00:25:45.461 "snapshot": false, 00:25:45.461 "clone": false, 00:25:45.461 "esnap_clone": false 00:25:45.461 } 00:25:45.461 } 00:25:45.461 } 00:25:45.461 ]' 00:25:45.461 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:45.461 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:45.461 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:45.721 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:45.721 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:45.721 13:45:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:45.721 13:45:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:25:45.721 13:45:37 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:25:45.721 13:45:37 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 48f75dee-cda0-4469-bc58-d8119ced020c -c nvc0n1p0 --l2p_dram_limit 60 00:25:45.982 [2024-11-20 13:45:37.800918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.800984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:45.982 [2024-11-20 13:45:37.801013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:45.982 
[2024-11-20 13:45:37.801026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.801153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.801180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.982 [2024-11-20 13:45:37.801196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:45.982 [2024-11-20 13:45:37.801207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.801273] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:45.982 [2024-11-20 13:45:37.802291] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:45.982 [2024-11-20 13:45:37.802346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.802363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.982 [2024-11-20 13:45:37.802378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:25:45.982 [2024-11-20 13:45:37.802390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.802563] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1154ba6d-20fb-410f-8df0-46aca16587f0 00:25:45.982 [2024-11-20 13:45:37.803744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.803947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:45.982 [2024-11-20 13:45:37.803976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:45.982 [2024-11-20 13:45:37.803991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.808733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.808805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.982 [2024-11-20 13:45:37.808824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms 00:25:45.982 [2024-11-20 13:45:37.808838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.809028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.809056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.982 [2024-11-20 13:45:37.809071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:25:45.982 [2024-11-20 13:45:37.809090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.809175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.809197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:45.982 [2024-11-20 13:45:37.809211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:45.982 [2024-11-20 13:45:37.809225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.809285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:45.982 [2024-11-20 13:45:37.813988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 
13:45:37.814032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.982 [2024-11-20 13:45:37.814051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:25:45.982 [2024-11-20 13:45:37.814066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.814144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.814167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:45.982 [2024-11-20 13:45:37.814182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:45.982 [2024-11-20 13:45:37.814193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.814292] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:45.982 [2024-11-20 13:45:37.814492] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:45.982 [2024-11-20 13:45:37.814522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:45.982 [2024-11-20 13:45:37.814538] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:45.982 [2024-11-20 13:45:37.814556] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:45.982 [2024-11-20 13:45:37.814570] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:45.982 [2024-11-20 13:45:37.814585] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:45.982 [2024-11-20 13:45:37.814597] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:45.982 [2024-11-20 13:45:37.814611] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:45.982 [2024-11-20 13:45:37.814634] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:45.982 [2024-11-20 13:45:37.814650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.982 [2024-11-20 13:45:37.814667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:45.982 [2024-11-20 13:45:37.814682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:25:45.982 [2024-11-20 13:45:37.814693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.982 [2024-11-20 13:45:37.814825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.983 [2024-11-20 13:45:37.814847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:45.983 [2024-11-20 13:45:37.814863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:45.983 [2024-11-20 13:45:37.814904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.983 [2024-11-20 13:45:37.815067] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:45.983 [2024-11-20 13:45:37.815086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:45.983 [2024-11-20 13:45:37.815113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815144] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:25:45.983 [2024-11-20 13:45:37.815156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:45.983 [2024-11-20 13:45:37.815201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:45.983 [2024-11-20 13:45:37.815225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:45.983 [2024-11-20 13:45:37.815236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:45.983 [2024-11-20 13:45:37.815248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:45.983 [2024-11-20 13:45:37.815259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:45.983 [2024-11-20 13:45:37.815272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:45.983 [2024-11-20 13:45:37.815283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:45.983 [2024-11-20 13:45:37.815311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:45.983 [2024-11-20 13:45:37.815346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:45.983 [2024-11-20 13:45:37.815381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:45.983 [2024-11-20 13:45:37.815416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:45.983 [2024-11-20 13:45:37.815451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:45.983 [2024-11-20 13:45:37.815488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:45.983 [2024-11-20 13:45:37.815512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:45.983 [2024-11-20 13:45:37.815546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:45.983 [2024-11-20 13:45:37.815560] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:45.983 [2024-11-20 13:45:37.815571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:45.983 [2024-11-20 13:45:37.815584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:45.983 [2024-11-20 13:45:37.815594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:45.983 [2024-11-20 13:45:37.815620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:45.983 [2024-11-20 13:45:37.815632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815642] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:45.983 [2024-11-20 13:45:37.815656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:45.983 [2024-11-20 13:45:37.815668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:45.983 [2024-11-20 13:45:37.815693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:45.983 [2024-11-20 13:45:37.815708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:45.983 [2024-11-20 13:45:37.815718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:45.983 [2024-11-20 13:45:37.815732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:45.983 [2024-11-20 13:45:37.815742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:45.983 [2024-11-20 13:45:37.815754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:45.983 [2024-11-20 13:45:37.815776] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:45.983 [2024-11-20 13:45:37.815794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:45.983 [2024-11-20 13:45:37.815807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:45.983 [2024-11-20 13:45:37.815821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:45.983 [2024-11-20 13:45:37.815832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:45.983 [2024-11-20 13:45:37.815845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:45.983 [2024-11-20 13:45:37.815857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:45.983 [2024-11-20 13:45:37.815884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:45.983 [2024-11-20 13:45:37.815898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:45.983 [2024-11-20 13:45:37.815912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:25:45.983 [2024-11-20 13:45:37.815924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:45.984 [2024-11-20 13:45:37.815941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:45.984 [2024-11-20 13:45:37.815953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:45.984 [2024-11-20 13:45:37.815967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:45.984 [2024-11-20 13:45:37.815980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:45.984 [2024-11-20 13:45:37.815995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:45.984 [2024-11-20 13:45:37.816006] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:45.984 [2024-11-20 13:45:37.816025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:45.984 [2024-11-20 13:45:37.816042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:45.984 [2024-11-20 13:45:37.816056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:45.984 [2024-11-20 13:45:37.816068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:45.984 [2024-11-20 13:45:37.816081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:45.984 [2024-11-20 13:45:37.816094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.984 [2024-11-20 13:45:37.816114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:45.984 [2024-11-20 13:45:37.816127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:25:45.984 [2024-11-20 13:45:37.816140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.984 [2024-11-20 13:45:37.816220] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:25:45.984 [2024-11-20 13:45:37.816249] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:50.167 [2024-11-20 13:45:41.381290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.381605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:50.167 [2024-11-20 13:45:41.381790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3565.091 ms 00:25:50.167 [2024-11-20 13:45:41.381883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.167 [2024-11-20 13:45:41.421364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.421669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.167 [2024-11-20 13:45:41.421824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.003 ms 00:25:50.167 [2024-11-20 13:45:41.422054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.167 [2024-11-20 13:45:41.422455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.422644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:50.167 [2024-11-20 13:45:41.422796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:50.167 [2024-11-20 13:45:41.422901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.167 [2024-11-20 13:45:41.487817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.488133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.167 [2024-11-20 13:45:41.488293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.718 ms 00:25:50.167 [2024-11-20 13:45:41.488445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.167 [2024-11-20 13:45:41.488568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.488630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.167 [2024-11-20 13:45:41.488747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:50.167 [2024-11-20 13:45:41.488810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.167 [2024-11-20 13:45:41.489438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.489595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.167 [2024-11-20 13:45:41.489729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:25:50.167 [2024-11-20 13:45:41.489798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.167 [2024-11-20 13:45:41.490137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.490213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.167 [2024-11-20 13:45:41.490354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:25:50.167 [2024-11-20 13:45:41.490390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.167 [2024-11-20 13:45:41.512384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.167 [2024-11-20 13:45:41.512668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.168 [2024-11-20 
13:45:41.512704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.943 ms 00:25:50.168 [2024-11-20 13:45:41.512723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.529217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:50.168 [2024-11-20 13:45:41.545469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.545564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:50.168 [2024-11-20 13:45:41.545593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.529 ms 00:25:50.168 [2024-11-20 13:45:41.545612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.652837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.652924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:50.168 [2024-11-20 13:45:41.652969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.148 ms 00:25:50.168 [2024-11-20 13:45:41.652984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.653277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.653305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:50.168 [2024-11-20 13:45:41.653327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:25:50.168 [2024-11-20 13:45:41.653341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.693756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.693857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:50.168 [2024-11-20 13:45:41.693919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.296 ms 00:25:50.168 [2024-11-20 13:45:41.693947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.733595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.733676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:50.168 [2024-11-20 13:45:41.733705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.549 ms 00:25:50.168 [2024-11-20 13:45:41.733720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.734685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.734727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:50.168 [2024-11-20 13:45:41.734749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:25:50.168 [2024-11-20 13:45:41.734763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.850819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.850928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:50.168 [2024-11-20 13:45:41.850962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.907 ms 00:25:50.168 [2024-11-20 13:45:41.850982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 
13:45:41.893660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.893743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:50.168 [2024-11-20 13:45:41.893771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.510 ms 00:25:50.168 [2024-11-20 13:45:41.893786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.937931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.938015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:50.168 [2024-11-20 13:45:41.938043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.070 ms 00:25:50.168 [2024-11-20 13:45:41.938058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.978443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.978521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:50.168 [2024-11-20 13:45:41.978549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.315 ms 00:25:50.168 [2024-11-20 13:45:41.978564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.978640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.978659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:50.168 [2024-11-20 13:45:41.978685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:50.168 [2024-11-20 13:45:41.978699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.978971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.168 [2024-11-20 13:45:41.979000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:50.168 [2024-11-20 13:45:41.979019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:50.168 [2024-11-20 13:45:41.979038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.168 [2024-11-20 13:45:41.980493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4178.967 ms, result 0 00:25:50.168 { 00:25:50.168 "name": "ftl0", 00:25:50.168 "uuid": "1154ba6d-20fb-410f-8df0-46aca16587f0" 00:25:50.168 } 00:25:50.168 13:45:41 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:25:50.168 13:45:41 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:50.168 13:45:41 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:50.168 13:45:41 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:25:50.168 13:45:41 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:50.168 13:45:41 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:50.168 13:45:42 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:50.426 13:45:42 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:50.684 [ 00:25:50.684 { 00:25:50.684 "name": "ftl0", 00:25:50.684 "aliases": [ 00:25:50.684 "1154ba6d-20fb-410f-8df0-46aca16587f0" 00:25:50.684 ], 00:25:50.684 "product_name": "FTL 
disk", 00:25:50.684 "block_size": 4096, 00:25:50.684 "num_blocks": 20971520, 00:25:50.684 "uuid": "1154ba6d-20fb-410f-8df0-46aca16587f0", 00:25:50.684 "assigned_rate_limits": { 00:25:50.684 "rw_ios_per_sec": 0, 00:25:50.684 "rw_mbytes_per_sec": 0, 00:25:50.684 "r_mbytes_per_sec": 0, 00:25:50.684 "w_mbytes_per_sec": 0 00:25:50.684 }, 00:25:50.684 "claimed": false, 00:25:50.684 "zoned": false, 00:25:50.684 "supported_io_types": { 00:25:50.684 "read": true, 00:25:50.684 "write": true, 00:25:50.684 "unmap": true, 00:25:50.684 "flush": true, 00:25:50.684 "reset": false, 00:25:50.684 "nvme_admin": false, 00:25:50.684 "nvme_io": false, 00:25:50.684 "nvme_io_md": false, 00:25:50.684 "write_zeroes": true, 00:25:50.684 "zcopy": false, 00:25:50.684 "get_zone_info": false, 00:25:50.684 "zone_management": false, 00:25:50.684 "zone_append": false, 00:25:50.684 "compare": false, 00:25:50.684 "compare_and_write": false, 00:25:50.684 "abort": false, 00:25:50.684 "seek_hole": false, 00:25:50.684 "seek_data": false, 00:25:50.684 "copy": false, 00:25:50.684 "nvme_iov_md": false 00:25:50.684 }, 00:25:50.685 "driver_specific": { 00:25:50.685 "ftl": { 00:25:50.685 "base_bdev": "48f75dee-cda0-4469-bc58-d8119ced020c", 00:25:50.685 "cache": "nvc0n1p0" 00:25:50.685 } 00:25:50.685 } 00:25:50.685 } 00:25:50.685 ] 00:25:50.685 13:45:42 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:25:50.685 13:45:42 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:25:50.685 13:45:42 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:51.250 13:45:43 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:25:51.250 13:45:43 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:51.509 [2024-11-20 13:45:43.466047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.466322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:51.509 [2024-11-20 13:45:43.466355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:51.509 [2024-11-20 13:45:43.466371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.509 [2024-11-20 13:45:43.466433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:51.509 [2024-11-20 13:45:43.469854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.469899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:51.509 [2024-11-20 13:45:43.469919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:25:51.509 [2024-11-20 13:45:43.469932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.509 [2024-11-20 13:45:43.470509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.470539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:51.509 [2024-11-20 13:45:43.470557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:25:51.509 [2024-11-20 13:45:43.470569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.509 [2024-11-20 13:45:43.473974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.474025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:51.509 
[2024-11-20 13:45:43.474046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:25:51.509 [2024-11-20 13:45:43.474058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.509 [2024-11-20 13:45:43.480816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.480888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:51.509 [2024-11-20 13:45:43.480911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.709 ms 00:25:51.509 [2024-11-20 13:45:43.480931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.509 [2024-11-20 13:45:43.513217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.513295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:51.509 [2024-11-20 13:45:43.513320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.101 ms 00:25:51.509 [2024-11-20 13:45:43.513333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.509 [2024-11-20 13:45:43.533795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.533910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:51.509 [2024-11-20 13:45:43.533951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.335 ms 00:25:51.509 [2024-11-20 13:45:43.533965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.509 [2024-11-20 13:45:43.534282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.509 [2024-11-20 13:45:43.534316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:51.509 [2024-11-20 13:45:43.534334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:25:51.509 [2024-11-20 13:45:43.534346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.768 [2024-11-20 13:45:43.567032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.768 [2024-11-20 13:45:43.567105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:51.768 [2024-11-20 13:45:43.567130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.640 ms 00:25:51.768 [2024-11-20 13:45:43.567143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.768 [2024-11-20 13:45:43.599037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.768 [2024-11-20 13:45:43.599114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:51.768 [2024-11-20 13:45:43.599140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.791 ms 00:25:51.768 [2024-11-20 13:45:43.599152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.768 [2024-11-20 13:45:43.630902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.768 [2024-11-20 13:45:43.630979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:51.768 [2024-11-20 13:45:43.631004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.639 ms 00:25:51.768 [2024-11-20 13:45:43.631017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.768 [2024-11-20 13:45:43.662879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.768 [2024-11-20 13:45:43.662957] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:51.768 [2024-11-20 13:45:43.662982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.635 ms 00:25:51.768 [2024-11-20 13:45:43.662994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.768 [2024-11-20 13:45:43.663083] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:51.768 [2024-11-20 13:45:43.663111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 
[2024-11-20 13:45:43.663449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:25:51.768 [2024-11-20 13:45:43.663817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:51.768 [2024-11-20 13:45:43.663830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.663999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:51.769 [2024-11-20 13:45:43.664686] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:51.769 [2024-11-20 13:45:43.664702] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1154ba6d-20fb-410f-8df0-46aca16587f0 00:25:51.769 [2024-11-20 13:45:43.664714] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:51.769 [2024-11-20 13:45:43.664729] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:51.769 [2024-11-20 13:45:43.664740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:51.769 [2024-11-20 13:45:43.664757] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:51.769 [2024-11-20 13:45:43.664769] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:51.769 [2024-11-20 13:45:43.664783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:51.769 [2024-11-20 13:45:43.664794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:51.769 [2024-11-20 13:45:43.664806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:51.769 [2024-11-20 13:45:43.664816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:51.769 [2024-11-20 13:45:43.664832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.769 [2024-11-20 13:45:43.664844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:51.769 [2024-11-20 13:45:43.664859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:25:51.769 [2024-11-20 13:45:43.664886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.769 [2024-11-20 13:45:43.682163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.769 [2024-11-20 13:45:43.682233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:51.769 [2024-11-20 13:45:43.682256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.156 ms 00:25:51.769 [2024-11-20 13:45:43.682269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.769 [2024-11-20 13:45:43.682755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.769 [2024-11-20 13:45:43.682777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:51.769 [2024-11-20 13:45:43.682793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:51.769 [2024-11-20 13:45:43.682804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.769 [2024-11-20 13:45:43.742433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.769 [2024-11-20 13:45:43.742520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.769 [2024-11-20 13:45:43.742543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.769 [2024-11-20 13:45:43.742556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
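The statistics dump above reports WAF: inf because write amplification is total writes divided by user writes, and no user I/O has reached the device yet (total writes: 960, user writes: 0). A one-liner reproducing that arithmetic with the values from this dump:

    awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'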
00:25:51.769 [2024-11-20 13:45:43.742669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.769 [2024-11-20 13:45:43.742686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.769 [2024-11-20 13:45:43.742701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.769 [2024-11-20 13:45:43.742713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.769 [2024-11-20 13:45:43.742913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.769 [2024-11-20 13:45:43.742937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.769 [2024-11-20 13:45:43.742953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.769 [2024-11-20 13:45:43.742965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.769 [2024-11-20 13:45:43.743006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.769 [2024-11-20 13:45:43.743020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.769 [2024-11-20 13:45:43.743034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.769 [2024-11-20 13:45:43.743045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.855195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.855272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.028 [2024-11-20 13:45:43.855294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 13:45:43.855306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.942292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.942361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.028 [2024-11-20 13:45:43.942384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 13:45:43.942397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.942544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.942563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.028 [2024-11-20 13:45:43.942582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 13:45:43.942594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.942709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.942728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.028 [2024-11-20 13:45:43.942744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 13:45:43.942755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.942935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.942956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.028 [2024-11-20 13:45:43.942972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 
13:45:43.942987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.943072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.943097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:52.028 [2024-11-20 13:45:43.943113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 13:45:43.943125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.943194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.943209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.028 [2024-11-20 13:45:43.943223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 13:45:43.943234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.943305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.028 [2024-11-20 13:45:43.943322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.028 [2024-11-20 13:45:43.943337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.028 [2024-11-20 13:45:43.943348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.028 [2024-11-20 13:45:43.943540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 477.468 ms, result 0 00:25:52.028 true 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77041 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77041 ']' 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77041 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77041 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.028 killing process with pid 77041 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77041' 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77041 00:25:52.028 13:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77041 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:57.297 13:45:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:57.297 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:25:57.297 fio-3.35 00:25:57.297 Starting 1 thread 00:26:02.564 00:26:02.564 test: (groupid=0, jobs=1): err= 0: pid=77274: Wed Nov 20 13:45:53 2024 00:26:02.564 read: IOPS=1029, BW=68.3MiB/s (71.7MB/s)(255MiB/3724msec) 00:26:02.564 slat (nsec): min=5757, max=48591, avg=7979.99, stdev=3813.32 00:26:02.564 clat (usec): min=305, max=3126, avg=433.27, stdev=72.69 00:26:02.564 lat (usec): min=311, max=3132, avg=441.25, stdev=73.35 00:26:02.564 clat percentiles (usec): 00:26:02.564 | 1.00th=[ 343], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 379], 00:26:02.564 | 30.00th=[ 392], 40.00th=[ 416], 50.00th=[ 433], 60.00th=[ 441], 00:26:02.564 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 537], 00:26:02.564 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 816], 99.95th=[ 1221], 00:26:02.564 | 99.99th=[ 3130] 00:26:02.564 write: IOPS=1036, BW=68.8MiB/s (72.2MB/s)(256MiB/3720msec); 0 zone resets 00:26:02.564 slat (usec): min=20, max=758, avg=25.73, stdev=13.61 00:26:02.564 clat (usec): min=350, max=1116, avg=486.91, stdev=64.58 00:26:02.564 lat (usec): min=372, max=1174, avg=512.64, stdev=66.09 00:26:02.564 clat percentiles (usec): 00:26:02.564 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 437], 00:26:02.564 | 30.00th=[ 461], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 490], 00:26:02.564 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 594], 00:26:02.564 | 99.00th=[ 693], 99.50th=[ 742], 99.90th=[ 857], 99.95th=[ 1020], 00:26:02.564 | 99.99th=[ 1123] 00:26:02.564 bw ( KiB/s): min=67864, max=72488, per=99.34%, avg=70020.57, stdev=1427.31, samples=7 00:26:02.564 iops : min= 998, max= 1066, avg=1029.71, stdev=20.99, samples=7 00:26:02.564 lat (usec) : 500=76.06%, 750=23.66%, 1000=0.23% 00:26:02.564 lat 
(msec) : 2=0.04%, 4=0.01% 00:26:02.564 cpu : usr=99.01%, sys=0.16%, ctx=11, majf=0, minf=1169 00:26:02.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.564 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:02.564 00:26:02.564 Run status group 0 (all jobs): 00:26:02.564 READ: bw=68.3MiB/s (71.7MB/s), 68.3MiB/s-68.3MiB/s (71.7MB/s-71.7MB/s), io=255MiB (267MB), run=3724-3724msec 00:26:02.564 WRITE: bw=68.8MiB/s (72.2MB/s), 68.8MiB/s-68.8MiB/s (72.2MB/s-72.2MB/s), io=256MiB (269MB), run=3720-3720msec 00:26:03.940 ----------------------------------------------------- 00:26:03.940 Suppressions used: 00:26:03.940 count bytes template 00:26:03.940 1 5 /usr/src/fio/parse.c 00:26:03.940 1 8 libtcmalloc_minimal.so 00:26:03.940 1 904 libcrypto.so 00:26:03.940 ----------------------------------------------------- 00:26:03.940 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:03.940 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:03.940 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:03.940 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:03.940 fio-3.35 00:26:03.940 Starting 2 threads 00:26:42.649 00:26:42.649 first_half: (groupid=0, jobs=1): err= 0: pid=77377: Wed Nov 20 13:46:29 2024 00:26:42.649 read: IOPS=2010, BW=8044KiB/s (8237kB/s)(255MiB/32443msec) 00:26:42.649 slat (usec): min=4, max=110, avg= 8.65, stdev= 3.15 00:26:42.649 clat (usec): min=856, max=420327, avg=48365.47, stdev=24222.02 00:26:42.649 lat (usec): min=874, max=420337, avg=48374.12, stdev=24222.37 00:26:42.649 clat percentiles (msec): 00:26:42.649 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:42.649 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 45], 00:26:42.649 | 70.00th=[ 47], 80.00th=[ 53], 90.00th=[ 57], 95.00th=[ 71], 00:26:42.649 | 99.00th=[ 180], 99.50th=[ 226], 99.90th=[ 275], 99.95th=[ 305], 00:26:42.649 | 99.99th=[ 409] 00:26:42.649 write: IOPS=2401, BW=9607KiB/s (9837kB/s)(256MiB/27287msec); 0 zone resets 00:26:42.649 slat (usec): min=5, max=276, avg=10.65, stdev= 6.00 00:26:42.649 clat (usec): min=472, max=124928, avg=15161.98, stdev=25606.91 00:26:42.649 lat (usec): min=502, max=124939, avg=15172.63, stdev=25607.16 00:26:42.649 clat percentiles (usec): 00:26:42.649 | 1.00th=[ 1045], 5.00th=[ 1352], 10.00th=[ 1565], 20.00th=[ 1991], 00:26:42.649 | 30.00th=[ 3326], 40.00th=[ 5473], 50.00th=[ 6652], 60.00th=[ 7832], 00:26:42.649 | 70.00th=[ 9503], 80.00th=[ 15795], 90.00th=[ 27919], 95.00th=[ 90702], 00:26:42.649 | 99.00th=[107480], 99.50th=[114820], 99.90th=[122160], 99.95th=[123208], 00:26:42.649 | 99.99th=[124257] 00:26:42.649 bw ( KiB/s): min= 944, max=40872, per=100.00%, avg=19418.07, stdev=10106.75, samples=27 00:26:42.649 iops : min= 236, max=10218, avg=4854.52, stdev=2526.69, samples=27 00:26:42.649 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.37% 00:26:42.649 lat (msec) : 2=9.87%, 4=6.37%, 10=19.77%, 20=7.29%, 50=39.30% 00:26:42.649 lat (msec) : 100=14.61%, 250=2.24%, 500=0.15% 00:26:42.649 cpu : usr=98.91%, sys=0.14%, ctx=54, majf=0, minf=5515 00:26:42.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.649 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:42.649 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:42.649 second_half: (groupid=0, jobs=1): err= 0: pid=77378: Wed Nov 20 13:46:29 2024 00:26:42.649 read: IOPS=1999, BW=7997KiB/s (8189kB/s)(255MiB/32604msec) 00:26:42.649 slat (nsec): min=4762, max=56833, avg=8542.52, stdev=3040.65 00:26:42.649 clat (usec): min=890, max=406945, avg=47879.43, stdev=25467.63 00:26:42.649 lat (usec): min=909, max=406955, avg=47887.98, stdev=25467.92 00:26:42.649 clat percentiles (msec): 00:26:42.649 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:42.649 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 45], 00:26:42.649 | 70.00th=[ 46], 80.00th=[ 52], 90.00th=[ 55], 95.00th=[ 63], 00:26:42.649 | 
99.00th=[ 188], 99.50th=[ 209], 99.90th=[ 264], 99.95th=[ 296], 00:26:42.649 | 99.99th=[ 393] 00:26:42.649 write: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(256MiB/26038msec); 0 zone resets 00:26:42.649 slat (usec): min=6, max=192, avg=10.50, stdev= 6.00 00:26:42.649 clat (usec): min=494, max=125564, avg=16026.90, stdev=26948.49 00:26:42.649 lat (usec): min=503, max=125575, avg=16037.39, stdev=26948.58 00:26:42.649 clat percentiles (usec): 00:26:42.649 | 1.00th=[ 1037], 5.00th=[ 1336], 10.00th=[ 1532], 20.00th=[ 1827], 00:26:42.649 | 30.00th=[ 2212], 40.00th=[ 3621], 50.00th=[ 5342], 60.00th=[ 7177], 00:26:42.649 | 70.00th=[ 10552], 80.00th=[ 16909], 90.00th=[ 57410], 95.00th=[ 91751], 00:26:42.649 | 99.00th=[107480], 99.50th=[113771], 99.90th=[122160], 99.95th=[124257], 00:26:42.649 | 99.99th=[125305] 00:26:42.649 bw ( KiB/s): min= 224, max=32408, per=94.09%, avg=18078.90, stdev=9354.74, samples=29 00:26:42.649 iops : min= 56, max= 8102, avg=4519.72, stdev=2338.68, samples=29 00:26:42.649 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.37% 00:26:42.649 lat (msec) : 2=12.25%, 4=8.72%, 10=13.82%, 20=7.99%, 50=40.02% 00:26:42.649 lat (msec) : 100=14.01%, 250=2.68%, 500=0.11% 00:26:42.649 cpu : usr=98.89%, sys=0.18%, ctx=58, majf=0, minf=5600 00:26:42.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.649 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:42.649 issued rwts: total=65185,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:42.650 00:26:42.650 Run status group 0 (all jobs): 00:26:42.650 READ: bw=15.6MiB/s (16.4MB/s), 7997KiB/s-8044KiB/s (8189kB/s-8237kB/s), io=509MiB (534MB), run=32443-32604msec 00:26:42.650 WRITE: bw=18.8MiB/s (19.7MB/s), 9607KiB/s-9.83MiB/s (9837kB/s-10.3MB/s), io=512MiB (537MB), run=26038-27287msec 00:26:42.650 ----------------------------------------------------- 00:26:42.650 Suppressions used: 00:26:42.650 count bytes template 00:26:42.650 2 10 /usr/src/fio/parse.c 00:26:42.650 2 192 /usr/src/fio/iolog.c 00:26:42.650 1 8 libtcmalloc_minimal.so 00:26:42.650 1 904 libcrypto.so 00:26:42.650 ----------------------------------------------------- 00:26:42.650 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:42.650 
13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:42.650 13:46:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:42.650 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:42.650 fio-3.35 00:26:42.650 Starting 1 thread 00:27:00.800 00:27:00.800 test: (groupid=0, jobs=1): err= 0: pid=77773: Wed Nov 20 13:46:51 2024 00:27:00.800 read: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(255MiB/10859msec) 00:27:00.800 slat (nsec): min=4668, max=44524, avg=7379.57, stdev=2331.33 00:27:00.800 clat (usec): min=770, max=40847, avg=21305.29, stdev=2489.56 00:27:00.800 lat (usec): min=775, max=40853, avg=21312.67, stdev=2489.55 00:27:00.800 clat percentiles (usec): 00:27:00.800 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19530], 00:27:00.800 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:27:00.800 | 70.00th=[21627], 80.00th=[22938], 90.00th=[25560], 95.00th=[26608], 00:27:00.800 | 99.00th=[28705], 99.50th=[29230], 99.90th=[31851], 99.95th=[35390], 00:27:00.800 | 99.99th=[39584] 00:27:00.800 write: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(256MiB/6161msec); 0 zone resets 00:27:00.800 slat (usec): min=6, max=184, avg=10.14, stdev= 5.20 00:27:00.800 clat (usec): min=735, max=86846, avg=11964.54, stdev=15656.44 00:27:00.800 lat (usec): min=743, max=86855, avg=11974.68, stdev=15656.49 00:27:00.800 clat percentiles (usec): 00:27:00.800 | 1.00th=[ 1057], 5.00th=[ 1237], 10.00th=[ 1369], 20.00th=[ 1582], 00:27:00.800 | 30.00th=[ 1827], 40.00th=[ 2474], 50.00th=[ 7570], 60.00th=[ 8586], 00:27:00.800 | 70.00th=[ 9634], 80.00th=[11338], 90.00th=[43254], 95.00th=[49021], 00:27:00.800 | 99.00th=[55313], 99.50th=[57934], 99.90th=[79168], 99.95th=[82314], 00:27:00.800 | 99.99th=[85459] 00:27:00.800 bw ( KiB/s): min=10072, max=59928, per=94.78%, avg=40329.85, stdev=13273.08, samples=13 00:27:00.800 iops : min= 2518, max=14982, avg=10082.46, stdev=3318.27, samples=13 00:27:00.800 lat (usec) : 750=0.01%, 1000=0.28% 00:27:00.800 lat (msec) : 2=17.15%, 4=3.39%, 10=15.67%, 20=25.37%, 50=36.02% 00:27:00.800 lat (msec) : 100=2.12% 00:27:00.800 cpu : usr=98.76%, sys=0.25%, ctx=27, majf=0, 
minf=5565 00:27:00.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:00.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.800 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:00.800 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:00.800 00:27:00.800 Run status group 0 (all jobs): 00:27:00.800 READ: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=255MiB (267MB), run=10859-10859msec 00:27:00.800 WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=256MiB (268MB), run=6161-6161msec 00:27:01.061 ----------------------------------------------------- 00:27:01.061 Suppressions used: 00:27:01.061 count bytes template 00:27:01.061 1 5 /usr/src/fio/parse.c 00:27:01.061 2 192 /usr/src/fio/iolog.c 00:27:01.061 1 8 libtcmalloc_minimal.so 00:27:01.061 1 904 libcrypto.so 00:27:01.061 ----------------------------------------------------- 00:27:01.061 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:27:01.319 Remove shared memory files 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58238 /dev/shm/spdk_tgt_trace.pid75964 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:27:01.319 00:27:01.319 real 1m20.900s 00:27:01.319 user 3m2.841s 00:27:01.319 sys 0m4.100s 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.319 13:46:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:01.319 ************************************ 00:27:01.319 END TEST ftl_fio_basic 00:27:01.319 ************************************ 00:27:01.319 13:46:53 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:01.319 13:46:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:01.319 13:46:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.319 13:46:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:01.319 ************************************ 00:27:01.319 START TEST ftl_bdevperf 00:27:01.319 ************************************ 00:27:01.319 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:01.319 * Looking for test storage... 
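With ftl_fio_basic finished, remove_shm deletes the per-target shared-memory trace files before ftl_bdevperf starts. A sketch of that cleanup, generalizing the two literal pid files from the trace (58238, 75964) into a glob:

    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # per-suite FTL config file
    rm -f /dev/shm/spdk_tgt_trace.pid*                            # glob form of the two pid files above
    rm -f /dev/shm/iscsi                                          # iscsi shm segment, if present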
00:27:01.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.319 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.319 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.319 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:01.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.660 --rc genhtml_branch_coverage=1 00:27:01.660 --rc genhtml_function_coverage=1 00:27:01.660 --rc genhtml_legend=1 00:27:01.660 --rc geninfo_all_blocks=1 00:27:01.660 --rc geninfo_unexecuted_blocks=1 00:27:01.660 00:27:01.660 ' 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.660 --rc genhtml_branch_coverage=1 00:27:01.660 
--rc genhtml_function_coverage=1 00:27:01.660 --rc genhtml_legend=1 00:27:01.660 --rc geninfo_all_blocks=1 00:27:01.660 --rc geninfo_unexecuted_blocks=1 00:27:01.660 00:27:01.660 ' 00:27:01.660 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:01.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.660 --rc genhtml_branch_coverage=1 00:27:01.660 --rc genhtml_function_coverage=1 00:27:01.660 --rc genhtml_legend=1 00:27:01.661 --rc geninfo_all_blocks=1 00:27:01.661 --rc geninfo_unexecuted_blocks=1 00:27:01.661 00:27:01.661 ' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.661 --rc genhtml_branch_coverage=1 00:27:01.661 --rc genhtml_function_coverage=1 00:27:01.661 --rc genhtml_legend=1 00:27:01.661 --rc geninfo_all_blocks=1 00:27:01.661 --rc geninfo_unexecuted_blocks=1 00:27:01.661 00:27:01.661 ' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78041 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78041 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78041 ']' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.661 13:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.661 [2024-11-20 13:46:53.527207] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
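bdevperf is launched with -z, which keeps the app idle until an RPC arrives, and waitforlisten blocks until the /var/tmp/spdk.sock socket answers. A sketch of that launch-and-wait pattern; the polling loop is an illustration of what waitforlisten accomplishes, not its actual implementation:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # Poll the RPC socket until the app responds; rpc_get_methods is a cheap no-op query.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done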
00:27:01.661 [2024-11-20 13:46:53.527366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78041 ] 00:27:01.919 [2024-11-20 13:46:53.720408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.919 [2024-11-20 13:46:53.829291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:27:02.853 13:46:54 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:03.112 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:03.679 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:03.679 { 00:27:03.679 "name": "nvme0n1", 00:27:03.679 "aliases": [ 00:27:03.679 "ace8f846-28ee-4749-ba41-4e18a31d5e16" 00:27:03.679 ], 00:27:03.679 "product_name": "NVMe disk", 00:27:03.679 "block_size": 4096, 00:27:03.679 "num_blocks": 1310720, 00:27:03.679 "uuid": "ace8f846-28ee-4749-ba41-4e18a31d5e16", 00:27:03.679 "numa_id": -1, 00:27:03.679 "assigned_rate_limits": { 00:27:03.679 "rw_ios_per_sec": 0, 00:27:03.679 "rw_mbytes_per_sec": 0, 00:27:03.679 "r_mbytes_per_sec": 0, 00:27:03.679 "w_mbytes_per_sec": 0 00:27:03.679 }, 00:27:03.679 "claimed": true, 00:27:03.679 "claim_type": "read_many_write_one", 00:27:03.679 "zoned": false, 00:27:03.679 "supported_io_types": { 00:27:03.679 "read": true, 00:27:03.679 "write": true, 00:27:03.679 "unmap": true, 00:27:03.679 "flush": true, 00:27:03.679 "reset": true, 00:27:03.679 "nvme_admin": true, 00:27:03.679 "nvme_io": true, 00:27:03.679 "nvme_io_md": false, 00:27:03.679 "write_zeroes": true, 00:27:03.679 "zcopy": false, 00:27:03.679 "get_zone_info": false, 00:27:03.679 "zone_management": false, 00:27:03.679 "zone_append": false, 00:27:03.679 "compare": true, 00:27:03.679 "compare_and_write": false, 00:27:03.679 "abort": true, 00:27:03.679 "seek_hole": false, 00:27:03.679 "seek_data": false, 00:27:03.679 "copy": true, 00:27:03.679 "nvme_iov_md": false 00:27:03.679 }, 00:27:03.679 "driver_specific": { 00:27:03.679 
"nvme": [ 00:27:03.679 { 00:27:03.679 "pci_address": "0000:00:11.0", 00:27:03.679 "trid": { 00:27:03.679 "trtype": "PCIe", 00:27:03.679 "traddr": "0000:00:11.0" 00:27:03.679 }, 00:27:03.679 "ctrlr_data": { 00:27:03.679 "cntlid": 0, 00:27:03.679 "vendor_id": "0x1b36", 00:27:03.679 "model_number": "QEMU NVMe Ctrl", 00:27:03.679 "serial_number": "12341", 00:27:03.679 "firmware_revision": "8.0.0", 00:27:03.679 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:03.679 "oacs": { 00:27:03.679 "security": 0, 00:27:03.679 "format": 1, 00:27:03.679 "firmware": 0, 00:27:03.679 "ns_manage": 1 00:27:03.679 }, 00:27:03.680 "multi_ctrlr": false, 00:27:03.680 "ana_reporting": false 00:27:03.680 }, 00:27:03.680 "vs": { 00:27:03.680 "nvme_version": "1.4" 00:27:03.680 }, 00:27:03.680 "ns_data": { 00:27:03.680 "id": 1, 00:27:03.680 "can_share": false 00:27:03.680 } 00:27:03.680 } 00:27:03.680 ], 00:27:03.680 "mp_policy": "active_passive" 00:27:03.680 } 00:27:03.680 } 00:27:03.680 ]' 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.680 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:03.938 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=28fde660-d4e4-4fb3-b506-47466cfb9d1b 00:27:03.938 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:27:03.938 13:46:55 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28fde660-d4e4-4fb3-b506-47466cfb9d1b 00:27:04.196 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:04.454 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ac2c1d26-6cfd-4028-9a5e-c62d72cbf5aa 00:27:04.454 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ac2c1d26-6cfd-4028-9a5e-c62d72cbf5aa 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.019 13:46:56 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:05.019 13:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:05.586 { 00:27:05.586 "name": "0f979018-314d-4030-af75-30f66e4b99fe", 00:27:05.586 "aliases": [ 00:27:05.586 "lvs/nvme0n1p0" 00:27:05.586 ], 00:27:05.586 "product_name": "Logical Volume", 00:27:05.586 "block_size": 4096, 00:27:05.586 "num_blocks": 26476544, 00:27:05.586 "uuid": "0f979018-314d-4030-af75-30f66e4b99fe", 00:27:05.586 "assigned_rate_limits": { 00:27:05.586 "rw_ios_per_sec": 0, 00:27:05.586 "rw_mbytes_per_sec": 0, 00:27:05.586 "r_mbytes_per_sec": 0, 00:27:05.586 "w_mbytes_per_sec": 0 00:27:05.586 }, 00:27:05.586 "claimed": false, 00:27:05.586 "zoned": false, 00:27:05.586 "supported_io_types": { 00:27:05.586 "read": true, 00:27:05.586 "write": true, 00:27:05.586 "unmap": true, 00:27:05.586 "flush": false, 00:27:05.586 "reset": true, 00:27:05.586 "nvme_admin": false, 00:27:05.586 "nvme_io": false, 00:27:05.586 "nvme_io_md": false, 00:27:05.586 "write_zeroes": true, 00:27:05.586 "zcopy": false, 00:27:05.586 "get_zone_info": false, 00:27:05.586 "zone_management": false, 00:27:05.586 "zone_append": false, 00:27:05.586 "compare": false, 00:27:05.586 "compare_and_write": false, 00:27:05.586 "abort": false, 00:27:05.586 "seek_hole": true, 00:27:05.586 "seek_data": true, 00:27:05.586 "copy": false, 00:27:05.586 "nvme_iov_md": false 00:27:05.586 }, 00:27:05.586 "driver_specific": { 00:27:05.586 "lvol": { 00:27:05.586 "lvol_store_uuid": "ac2c1d26-6cfd-4028-9a5e-c62d72cbf5aa", 00:27:05.586 "base_bdev": "nvme0n1", 00:27:05.586 "thin_provision": true, 00:27:05.586 "num_allocated_clusters": 0, 00:27:05.586 "snapshot": false, 00:27:05.586 "clone": false, 00:27:05.586 "esnap_clone": false 00:27:05.586 } 00:27:05.586 } 00:27:05.586 } 00:27:05.586 ]' 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:27:05.586 13:46:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=0f979018-314d-4030-af75-30f66e4b99fe 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:05.845 13:46:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f979018-314d-4030-af75-30f66e4b99fe 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:06.103 { 00:27:06.103 "name": "0f979018-314d-4030-af75-30f66e4b99fe", 00:27:06.103 "aliases": [ 00:27:06.103 "lvs/nvme0n1p0" 00:27:06.103 ], 00:27:06.103 "product_name": "Logical Volume", 00:27:06.103 "block_size": 4096, 00:27:06.103 "num_blocks": 26476544, 00:27:06.103 "uuid": "0f979018-314d-4030-af75-30f66e4b99fe", 00:27:06.103 "assigned_rate_limits": { 00:27:06.103 "rw_ios_per_sec": 0, 00:27:06.103 "rw_mbytes_per_sec": 0, 00:27:06.103 "r_mbytes_per_sec": 0, 00:27:06.103 "w_mbytes_per_sec": 0 00:27:06.103 }, 00:27:06.103 "claimed": false, 00:27:06.103 "zoned": false, 00:27:06.103 "supported_io_types": { 00:27:06.103 "read": true, 00:27:06.103 "write": true, 00:27:06.103 "unmap": true, 00:27:06.103 "flush": false, 00:27:06.103 "reset": true, 00:27:06.103 "nvme_admin": false, 00:27:06.103 "nvme_io": false, 00:27:06.103 "nvme_io_md": false, 00:27:06.103 "write_zeroes": true, 00:27:06.103 "zcopy": false, 00:27:06.103 "get_zone_info": false, 00:27:06.103 "zone_management": false, 00:27:06.103 "zone_append": false, 00:27:06.103 "compare": false, 00:27:06.103 "compare_and_write": false, 00:27:06.103 "abort": false, 00:27:06.103 "seek_hole": true, 00:27:06.103 "seek_data": true, 00:27:06.103 "copy": false, 00:27:06.103 "nvme_iov_md": false 00:27:06.103 }, 00:27:06.103 "driver_specific": { 00:27:06.103 "lvol": { 00:27:06.103 "lvol_store_uuid": "ac2c1d26-6cfd-4028-9a5e-c62d72cbf5aa", 00:27:06.103 "base_bdev": "nvme0n1", 00:27:06.103 "thin_provision": true, 00:27:06.103 "num_allocated_clusters": 0, 00:27:06.103 "snapshot": false, 00:27:06.103 "clone": false, 00:27:06.103 "esnap_clone": false 00:27:06.103 } 00:27:06.103 } 00:27:06.103 } 00:27:06.103 ]' 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:27:06.103 13:46:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:06.669 13:46:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:27:06.669 13:46:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 0f979018-314d-4030-af75-30f66e4b99fe 00:27:06.669 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0f979018-314d-4030-af75-30f66e4b99fe 00:27:06.669 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:06.669 13:46:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:27:06.669 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:06.669 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f979018-314d-4030-af75-30f66e4b99fe 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:06.927 { 00:27:06.927 "name": "0f979018-314d-4030-af75-30f66e4b99fe", 00:27:06.927 "aliases": [ 00:27:06.927 "lvs/nvme0n1p0" 00:27:06.927 ], 00:27:06.927 "product_name": "Logical Volume", 00:27:06.927 "block_size": 4096, 00:27:06.927 "num_blocks": 26476544, 00:27:06.927 "uuid": "0f979018-314d-4030-af75-30f66e4b99fe", 00:27:06.927 "assigned_rate_limits": { 00:27:06.927 "rw_ios_per_sec": 0, 00:27:06.927 "rw_mbytes_per_sec": 0, 00:27:06.927 "r_mbytes_per_sec": 0, 00:27:06.927 "w_mbytes_per_sec": 0 00:27:06.927 }, 00:27:06.927 "claimed": false, 00:27:06.927 "zoned": false, 00:27:06.927 "supported_io_types": { 00:27:06.927 "read": true, 00:27:06.927 "write": true, 00:27:06.927 "unmap": true, 00:27:06.927 "flush": false, 00:27:06.927 "reset": true, 00:27:06.927 "nvme_admin": false, 00:27:06.927 "nvme_io": false, 00:27:06.927 "nvme_io_md": false, 00:27:06.927 "write_zeroes": true, 00:27:06.927 "zcopy": false, 00:27:06.927 "get_zone_info": false, 00:27:06.927 "zone_management": false, 00:27:06.927 "zone_append": false, 00:27:06.927 "compare": false, 00:27:06.927 "compare_and_write": false, 00:27:06.927 "abort": false, 00:27:06.927 "seek_hole": true, 00:27:06.927 "seek_data": true, 00:27:06.927 "copy": false, 00:27:06.927 "nvme_iov_md": false 00:27:06.927 }, 00:27:06.927 "driver_specific": { 00:27:06.927 "lvol": { 00:27:06.927 "lvol_store_uuid": "ac2c1d26-6cfd-4028-9a5e-c62d72cbf5aa", 00:27:06.927 "base_bdev": "nvme0n1", 00:27:06.927 "thin_provision": true, 00:27:06.927 "num_allocated_clusters": 0, 00:27:06.927 "snapshot": false, 00:27:06.927 "clone": false, 00:27:06.927 "esnap_clone": false 00:27:06.927 } 00:27:06.927 } 00:27:06.927 } 00:27:06.927 ]' 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:06.927 13:46:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:27:06.928 13:46:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0f979018-314d-4030-af75-30f66e4b99fe -c nvc0n1p0 --l2p_dram_limit 20 00:27:07.186 [2024-11-20 13:46:59.220511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.186 [2024-11-20 13:46:59.220605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:07.186 [2024-11-20 13:46:59.220635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:07.186 [2024-11-20 13:46:59.220657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.186 [2024-11-20 13:46:59.220787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.186 [2024-11-20 13:46:59.220818] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:07.186 [2024-11-20 13:46:59.220837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:07.186 [2024-11-20 13:46:59.220892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.186 [2024-11-20 13:46:59.220955] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:07.186 [2024-11-20 13:46:59.222791] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:07.186 [2024-11-20 13:46:59.222835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.186 [2024-11-20 13:46:59.222852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:07.186 [2024-11-20 13:46:59.222882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:27:07.186 [2024-11-20 13:46:59.222901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.186 [2024-11-20 13:46:59.223157] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 455b727a-bb93-4f91-af5d-bd7b6fdd7e43 00:27:07.444 [2024-11-20 13:46:59.224290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.444 [2024-11-20 13:46:59.224332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:07.444 [2024-11-20 13:46:59.224352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:07.444 [2024-11-20 13:46:59.224372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:46:59.229158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.444 [2024-11-20 13:46:59.229223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:07.444 [2024-11-20 13:46:59.229244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.693 ms 00:27:07.444 [2024-11-20 13:46:59.229256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.444 [2024-11-20 13:46:59.229412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.444 [2024-11-20 13:46:59.229433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:07.444 [2024-11-20 13:46:59.229454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:07.444 [2024-11-20 13:46:59.229467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:46:59.229541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.445 [2024-11-20 13:46:59.229561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:07.445 [2024-11-20 13:46:59.229576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:07.445 [2024-11-20 13:46:59.229588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:46:59.229630] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:07.445 [2024-11-20 13:46:59.234303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.445 [2024-11-20 13:46:59.234357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:07.445 [2024-11-20 13:46:59.234374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.690 ms 00:27:07.445 [2024-11-20 13:46:59.234395] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:46:59.234445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.445 [2024-11-20 13:46:59.234464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:07.445 [2024-11-20 13:46:59.234476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:07.445 [2024-11-20 13:46:59.234490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:46:59.234570] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:07.445 [2024-11-20 13:46:59.234755] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:07.445 [2024-11-20 13:46:59.234776] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:07.445 [2024-11-20 13:46:59.234794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:07.445 [2024-11-20 13:46:59.234810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:07.445 [2024-11-20 13:46:59.234827] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:07.445 [2024-11-20 13:46:59.234839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:07.445 [2024-11-20 13:46:59.234853] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:07.445 [2024-11-20 13:46:59.234864] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:07.445 [2024-11-20 13:46:59.234895] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:07.445 [2024-11-20 13:46:59.234908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.445 [2024-11-20 13:46:59.234925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:07.445 [2024-11-20 13:46:59.234938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:27:07.445 [2024-11-20 13:46:59.234952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:46:59.235047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.445 [2024-11-20 13:46:59.235066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:07.445 [2024-11-20 13:46:59.235079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:07.445 [2024-11-20 13:46:59.235094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.445 [2024-11-20 13:46:59.235197] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:07.445 [2024-11-20 13:46:59.235226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:07.445 [2024-11-20 13:46:59.235243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:07.445 [2024-11-20 13:46:59.235284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:07.445 
[2024-11-20 13:46:59.235308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:07.445 [2024-11-20 13:46:59.235319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.445 [2024-11-20 13:46:59.235344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:07.445 [2024-11-20 13:46:59.235357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:07.445 [2024-11-20 13:46:59.235368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.445 [2024-11-20 13:46:59.235396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:07.445 [2024-11-20 13:46:59.235408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:07.445 [2024-11-20 13:46:59.235424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:07.445 [2024-11-20 13:46:59.235454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:07.445 [2024-11-20 13:46:59.235491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:07.445 [2024-11-20 13:46:59.235527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:07.445 [2024-11-20 13:46:59.235561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:07.445 [2024-11-20 13:46:59.235598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:07.445 [2024-11-20 13:46:59.235634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.445 [2024-11-20 13:46:59.235658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:07.445 [2024-11-20 13:46:59.235670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:07.445 [2024-11-20 13:46:59.235681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.445 [2024-11-20 13:46:59.235694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:07.445 [2024-11-20 13:46:59.235705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:27:07.445 [2024-11-20 13:46:59.235718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:07.445 [2024-11-20 13:46:59.235743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:07.445 [2024-11-20 13:46:59.235754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235766] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:07.445 [2024-11-20 13:46:59.235778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:07.445 [2024-11-20 13:46:59.235792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.445 [2024-11-20 13:46:59.235822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:07.445 [2024-11-20 13:46:59.235835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:07.445 [2024-11-20 13:46:59.235848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:07.445 [2024-11-20 13:46:59.235860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:07.445 [2024-11-20 13:46:59.235888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:07.445 [2024-11-20 13:46:59.235902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:07.445 [2024-11-20 13:46:59.235919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:07.445 [2024-11-20 13:46:59.235934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.445 [2024-11-20 13:46:59.235950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:07.445 [2024-11-20 13:46:59.235961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:07.445 [2024-11-20 13:46:59.235975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:07.445 [2024-11-20 13:46:59.235987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:07.445 [2024-11-20 13:46:59.236001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:07.445 [2024-11-20 13:46:59.236013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:07.445 [2024-11-20 13:46:59.236026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:07.445 [2024-11-20 13:46:59.236038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:07.445 [2024-11-20 13:46:59.236053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:07.445 [2024-11-20 13:46:59.236065] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:07.445 [2024-11-20 13:46:59.236078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:07.445 [2024-11-20 13:46:59.236090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:07.445 [2024-11-20 13:46:59.236103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:07.446 [2024-11-20 13:46:59.236115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:07.446 [2024-11-20 13:46:59.236128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:07.446 [2024-11-20 13:46:59.236142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.446 [2024-11-20 13:46:59.236159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:07.446 [2024-11-20 13:46:59.236171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:07.446 [2024-11-20 13:46:59.236185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:07.446 [2024-11-20 13:46:59.236197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:07.446 [2024-11-20 13:46:59.236212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.446 [2024-11-20 13:46:59.236226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:07.446 [2024-11-20 13:46:59.236240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:27:07.446 [2024-11-20 13:46:59.236252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.446 [2024-11-20 13:46:59.236302] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:27:07.446 [2024-11-20 13:46:59.236320] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:09.401 [2024-11-20 13:47:01.200237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.200319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:09.401 [2024-11-20 13:47:01.200350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1963.935 ms 00:27:09.401 [2024-11-20 13:47:01.200363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.233623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.233692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.401 [2024-11-20 13:47:01.233716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.973 ms 00:27:09.401 [2024-11-20 13:47:01.233729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.233936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.233959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:09.401 [2024-11-20 13:47:01.233979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:09.401 [2024-11-20 13:47:01.233991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.285658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.285730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.401 [2024-11-20 13:47:01.285756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.604 ms 00:27:09.401 [2024-11-20 13:47:01.285770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.285837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.285857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.401 [2024-11-20 13:47:01.285887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:09.401 [2024-11-20 13:47:01.285902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.286335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.286366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.401 [2024-11-20 13:47:01.286384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:27:09.401 [2024-11-20 13:47:01.286397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.286547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.286571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.401 [2024-11-20 13:47:01.286590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:27:09.401 [2024-11-20 13:47:01.286602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.303577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.303647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.401 [2024-11-20 
13:47:01.303671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.946 ms 00:27:09.401 [2024-11-20 13:47:01.303685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.317459] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:27:09.401 [2024-11-20 13:47:01.322567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.322639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:09.401 [2024-11-20 13:47:01.322660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.733 ms 00:27:09.401 [2024-11-20 13:47:01.322697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.386768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.386930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:09.401 [2024-11-20 13:47:01.386972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.988 ms 00:27:09.401 [2024-11-20 13:47:01.387000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.387344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.387397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:09.401 [2024-11-20 13:47:01.387423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:27:09.401 [2024-11-20 13:47:01.387448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.401 [2024-11-20 13:47:01.420995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.401 [2024-11-20 13:47:01.421097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:09.401 [2024-11-20 13:47:01.421119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.373 ms 00:27:09.401 [2024-11-20 13:47:01.421134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.660 [2024-11-20 13:47:01.453416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.660 [2024-11-20 13:47:01.453507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:09.660 [2024-11-20 13:47:01.453529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.187 ms 00:27:09.660 [2024-11-20 13:47:01.453544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.660 [2024-11-20 13:47:01.454445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.660 [2024-11-20 13:47:01.454485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:09.660 [2024-11-20 13:47:01.454501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:27:09.660 [2024-11-20 13:47:01.454516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.660 [2024-11-20 13:47:01.569205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.660 [2024-11-20 13:47:01.569371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:09.660 [2024-11-20 13:47:01.569408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.566 ms 00:27:09.660 [2024-11-20 13:47:01.569434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.660 [2024-11-20 
13:47:01.619103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.660 [2024-11-20 13:47:01.619232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:09.660 [2024-11-20 13:47:01.619273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.319 ms 00:27:09.660 [2024-11-20 13:47:01.619298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.660 [2024-11-20 13:47:01.668272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.660 [2024-11-20 13:47:01.668439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:09.660 [2024-11-20 13:47:01.668476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.733 ms 00:27:09.660 [2024-11-20 13:47:01.668506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.917 [2024-11-20 13:47:01.718237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.917 [2024-11-20 13:47:01.718370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:09.917 [2024-11-20 13:47:01.718405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.579 ms 00:27:09.917 [2024-11-20 13:47:01.718431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.917 [2024-11-20 13:47:01.718594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.917 [2024-11-20 13:47:01.718638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:09.917 [2024-11-20 13:47:01.718688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:09.917 [2024-11-20 13:47:01.718716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.917 [2024-11-20 13:47:01.718993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.917 [2024-11-20 13:47:01.719043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:09.917 [2024-11-20 13:47:01.719068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:09.917 [2024-11-20 13:47:01.719093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.917 [2024-11-20 13:47:01.720527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2499.375 ms, result 0 00:27:09.917 { 00:27:09.917 "name": "ftl0", 00:27:09.917 "uuid": "455b727a-bb93-4f91-af5d-bd7b6fdd7e43" 00:27:09.917 } 00:27:09.917 13:47:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:27:09.917 13:47:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:27:09.917 13:47:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:27:10.175 13:47:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:27:10.434 [2024-11-20 13:47:02.330097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:10.434 I/O size of 69632 is greater than zero copy threshold (65536). 00:27:10.434 Zero copy mechanism will not be used. 00:27:10.434 Running I/O for 4 seconds... 
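Once ftl0 exists, the runs are driven through bdevperf's companion RPC script rather than scripts/rpc.py; each perform_tests call is one timed 4-second workload against the already-running app. A sketch of the three invocations that follow in this log, assuming the default RPC socket:

  bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  "$bdevperf_py" perform_tests -q 1   -w randwrite -t 4 -o 69632   # 69632 B (17 x 4 KiB) exceeds the 65536 B zero-copy threshold
  "$bdevperf_py" perform_tests -q 128 -w randwrite -t 4 -o 4096
  "$bdevperf_py" perform_tests -q 128 -w verify    -t 4 -o 4096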
00:27:12.301 2203.00 IOPS, 146.29 MiB/s [2024-11-20T13:47:05.718Z] 2202.00 IOPS, 146.23 MiB/s [2024-11-20T13:47:06.653Z] 2202.33 IOPS, 146.25 MiB/s [2024-11-20T13:47:06.653Z] 2151.50 IOPS, 142.87 MiB/s 00:27:14.614 Latency(us) 00:27:14.614 [2024-11-20T13:47:06.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.614 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:27:14.614 ftl0 : 4.00 2150.63 142.81 0.00 0.00 486.88 223.42 4587.52 00:27:14.614 [2024-11-20T13:47:06.653Z] =================================================================================================================== 00:27:14.614 [2024-11-20T13:47:06.653Z] Total : 2150.63 142.81 0.00 0.00 486.88 223.42 4587.52 00:27:14.614 [2024-11-20 13:47:06.342094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:14.614 { 00:27:14.614 "results": [ 00:27:14.614 { 00:27:14.614 "job": "ftl0", 00:27:14.614 "core_mask": "0x1", 00:27:14.614 "workload": "randwrite", 00:27:14.614 "status": "finished", 00:27:14.614 "queue_depth": 1, 00:27:14.614 "io_size": 69632, 00:27:14.614 "runtime": 4.002091, 00:27:14.614 "iops": 2150.6257603837594, 00:27:14.614 "mibps": 142.81499190048402, 00:27:14.614 "io_failed": 0, 00:27:14.614 "io_timeout": 0, 00:27:14.614 "avg_latency_us": 486.88286785597353, 00:27:14.614 "min_latency_us": 223.4181818181818, 00:27:14.614 "max_latency_us": 4587.52 00:27:14.614 } 00:27:14.614 ], 00:27:14.614 "core_count": 1 00:27:14.614 } 00:27:14.614 13:47:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:27:14.614 [2024-11-20 13:47:06.468966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:14.614 Running I/O for 4 seconds... 
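The MiB/s column in these tables is derived rather than independently measured: it is IOPS x io_size / 2^20. Reproducing the q=1 figure above from the full-precision "iops" value in its JSON result (a sanity check only, not part of the suite):

  awk 'BEGIN { printf "%.2f MiB/s\n", 2150.6257603837594 * 69632 / 1048576 }'   # prints 142.81 MiB/s, matching "mibps"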
00:27:16.484 7482.00 IOPS, 29.23 MiB/s [2024-11-20T13:47:09.898Z] 7261.50 IOPS, 28.37 MiB/s [2024-11-20T13:47:10.831Z] 7219.33 IOPS, 28.20 MiB/s [2024-11-20T13:47:10.831Z] 7137.25 IOPS, 27.88 MiB/s 00:27:18.792 Latency(us) 00:27:18.792 [2024-11-20T13:47:10.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.792 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.792 ftl0 : 4.02 7129.00 27.85 0.00 0.00 17900.60 351.88 40274.85 00:27:18.792 [2024-11-20T13:47:10.831Z] =================================================================================================================== 00:27:18.792 [2024-11-20T13:47:10.831Z] Total : 7129.00 27.85 0.00 0.00 17900.60 0.00 40274.85 00:27:18.792 [2024-11-20 13:47:10.501987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:18.792 { 00:27:18.792 "results": [ 00:27:18.792 { 00:27:18.792 "job": "ftl0", 00:27:18.792 "core_mask": "0x1", 00:27:18.792 "workload": "randwrite", 00:27:18.792 "status": "finished", 00:27:18.792 "queue_depth": 128, 00:27:18.792 "io_size": 4096, 00:27:18.792 "runtime": 4.022582, 00:27:18.792 "iops": 7129.003212364596, 00:27:18.792 "mibps": 27.847668798299203, 00:27:18.792 "io_failed": 0, 00:27:18.792 "io_timeout": 0, 00:27:18.792 "avg_latency_us": 17900.60403173909, 00:27:18.792 "min_latency_us": 351.88363636363636, 00:27:18.792 "max_latency_us": 40274.85090909091 00:27:18.792 } 00:27:18.792 ], 00:27:18.792 "core_count": 1 00:27:18.792 } 00:27:18.792 13:47:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:27:18.792 [2024-11-20 13:47:10.660888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:18.792 Running I/O for 4 seconds... 
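The verify pass below reads data back and checks it over an LBA range starting at 0x0 with length 0x1400000 blocks, i.e. 20971520 blocks, the same figure the startup layout dump reported as "L2P entries". At 4 B per entry that is an 80 MiB map, which --l2p_dram_limit 20 caps at roughly 20 MiB resident (hence the earlier "l2p maximum resident size is: 19 (of 20) MiB" notice). A quick check of that arithmetic:

  echo $((0x1400000)) $((0x1400000 * 4 / 1024 / 1024))   # 20971520 blocks, 80 MiB of L2P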
00:27:20.706 5309.00 IOPS, 20.74 MiB/s [2024-11-20T13:47:13.695Z] 5299.50 IOPS, 20.70 MiB/s [2024-11-20T13:47:15.071Z] 5272.67 IOPS, 20.60 MiB/s [2024-11-20T13:47:15.071Z] 5371.00 IOPS, 20.98 MiB/s 00:27:23.032 Latency(us) 00:27:23.032 [2024-11-20T13:47:15.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.032 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:23.032 Verification LBA range: start 0x0 length 0x1400000 00:27:23.032 ftl0 : 4.01 5384.03 21.03 0.00 0.00 23693.09 383.53 43134.60 00:27:23.032 [2024-11-20T13:47:15.071Z] =================================================================================================================== 00:27:23.032 [2024-11-20T13:47:15.071Z] Total : 5384.03 21.03 0.00 0.00 23693.09 0.00 43134.60 00:27:23.032 [2024-11-20 13:47:14.693123] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:23.032 { 00:27:23.032 "results": [ 00:27:23.032 { 00:27:23.032 "job": "ftl0", 00:27:23.032 "core_mask": "0x1", 00:27:23.032 "workload": "verify", 00:27:23.032 "status": "finished", 00:27:23.032 "verify_range": { 00:27:23.032 "start": 0, 00:27:23.032 "length": 20971520 00:27:23.032 }, 00:27:23.032 "queue_depth": 128, 00:27:23.032 "io_size": 4096, 00:27:23.032 "runtime": 4.013539, 00:27:23.032 "iops": 5384.026416586459, 00:27:23.032 "mibps": 21.031353189790856, 00:27:23.032 "io_failed": 0, 00:27:23.032 "io_timeout": 0, 00:27:23.032 "avg_latency_us": 23693.09452509266, 00:27:23.032 "min_latency_us": 383.5345454545455, 00:27:23.032 "max_latency_us": 43134.60363636364 00:27:23.032 } 00:27:23.032 ], 00:27:23.032 "core_count": 1 00:27:23.032 } 00:27:23.032 13:47:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:27:23.032 [2024-11-20 13:47:15.006611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.032 [2024-11-20 13:47:15.006918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:23.032 [2024-11-20 13:47:15.006964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:23.032 [2024-11-20 13:47:15.006981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.032 [2024-11-20 13:47:15.007030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:23.032 [2024-11-20 13:47:15.010360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.032 [2024-11-20 13:47:15.010521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:23.032 [2024-11-20 13:47:15.010556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:27:23.032 [2024-11-20 13:47:15.010570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.032 [2024-11-20 13:47:15.012277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.032 [2024-11-20 13:47:15.012321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:23.032 [2024-11-20 13:47:15.012345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:27:23.032 [2024-11-20 13:47:15.012361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.291 [2024-11-20 13:47:15.190626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.291 [2024-11-20 13:47:15.190723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:27:23.291 [2024-11-20 13:47:15.190752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.221 ms 00:27:23.291 [2024-11-20 13:47:15.190766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.291 [2024-11-20 13:47:15.197531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.291 [2024-11-20 13:47:15.197729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:23.291 [2024-11-20 13:47:15.197766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.710 ms 00:27:23.291 [2024-11-20 13:47:15.197781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.291 [2024-11-20 13:47:15.229675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.291 [2024-11-20 13:47:15.229931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:23.291 [2024-11-20 13:47:15.230103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.745 ms 00:27:23.291 [2024-11-20 13:47:15.230257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.291 [2024-11-20 13:47:15.249255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.291 [2024-11-20 13:47:15.249482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:23.291 [2024-11-20 13:47:15.249616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.821 ms 00:27:23.291 [2024-11-20 13:47:15.249672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.291 [2024-11-20 13:47:15.250114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.291 [2024-11-20 13:47:15.250255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:23.291 [2024-11-20 13:47:15.250375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:27:23.291 [2024-11-20 13:47:15.250492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.291 [2024-11-20 13:47:15.282744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.291 [2024-11-20 13:47:15.283018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:23.291 [2024-11-20 13:47:15.283145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.168 ms 00:27:23.291 [2024-11-20 13:47:15.283260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.291 [2024-11-20 13:47:15.315209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.291 [2024-11-20 13:47:15.315433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:23.291 [2024-11-20 13:47:15.315581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.795 ms 00:27:23.291 [2024-11-20 13:47:15.315634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.551 [2024-11-20 13:47:15.346776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.551 [2024-11-20 13:47:15.347013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:23.551 [2024-11-20 13:47:15.347153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.950 ms 00:27:23.551 [2024-11-20 13:47:15.347206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.551 [2024-11-20 13:47:15.378618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.551 [2024-11-20 13:47:15.378856] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:23.551 [2024-11-20 13:47:15.379038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.140 ms 00:27:23.551 [2024-11-20 13:47:15.379092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.551 [2024-11-20 13:47:15.379188] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:23.551 [2024-11-20 13:47:15.379362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.379996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:27:23.551 [2024-11-20 13:47:15.380085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:23.551 [2024-11-20 13:47:15.380295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.380998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381093] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:23.552 [2024-11-20 13:47:15.381156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:23.552 [2024-11-20 13:47:15.381170] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 455b727a-bb93-4f91-af5d-bd7b6fdd7e43 00:27:23.552 [2024-11-20 13:47:15.381183] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:23.552 [2024-11-20 13:47:15.381199] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:23.552 [2024-11-20 13:47:15.381210] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:23.552 [2024-11-20 13:47:15.381224] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:23.552 [2024-11-20 13:47:15.381235] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:23.552 [2024-11-20 13:47:15.381249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:23.552 [2024-11-20 13:47:15.381261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:23.552 [2024-11-20 13:47:15.381275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:23.552 [2024-11-20 13:47:15.381285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:23.552 [2024-11-20 13:47:15.381300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.552 [2024-11-20 13:47:15.381313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:23.552 [2024-11-20 13:47:15.381328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:27:23.552 [2024-11-20 13:47:15.381339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.552 [2024-11-20 13:47:15.398092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.552 [2024-11-20 13:47:15.398148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:23.552 [2024-11-20 13:47:15.398172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.665 ms 00:27:23.552 [2024-11-20 13:47:15.398185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.552 [2024-11-20 13:47:15.398633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.552 [2024-11-20 13:47:15.398654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:23.552 [2024-11-20 13:47:15.398679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:27:23.552 [2024-11-20 13:47:15.398693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.552 [2024-11-20 13:47:15.444864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.552 [2024-11-20 13:47:15.444946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:23.552 [2024-11-20 13:47:15.444973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.552 [2024-11-20 13:47:15.444987] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:23.552 [2024-11-20 13:47:15.445072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.552 [2024-11-20 13:47:15.445087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:23.552 [2024-11-20 13:47:15.445102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.552 [2024-11-20 13:47:15.445113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.552 [2024-11-20 13:47:15.445285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.552 [2024-11-20 13:47:15.445306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:23.552 [2024-11-20 13:47:15.445321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.552 [2024-11-20 13:47:15.445332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.552 [2024-11-20 13:47:15.445359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.553 [2024-11-20 13:47:15.445374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:23.553 [2024-11-20 13:47:15.445388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.553 [2024-11-20 13:47:15.445399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.553 [2024-11-20 13:47:15.549322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.553 [2024-11-20 13:47:15.549394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:23.553 [2024-11-20 13:47:15.549420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.553 [2024-11-20 13:47:15.549433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.811 [2024-11-20 13:47:15.634906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.811 [2024-11-20 13:47:15.634977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:23.811 [2024-11-20 13:47:15.635001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.811 [2024-11-20 13:47:15.635014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.811 [2024-11-20 13:47:15.635161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.811 [2024-11-20 13:47:15.635185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:23.811 [2024-11-20 13:47:15.635200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.811 [2024-11-20 13:47:15.635212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.811 [2024-11-20 13:47:15.635280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.811 [2024-11-20 13:47:15.635298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:23.811 [2024-11-20 13:47:15.635313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.811 [2024-11-20 13:47:15.635324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.811 [2024-11-20 13:47:15.635455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.811 [2024-11-20 13:47:15.635482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:23.811 [2024-11-20 13:47:15.635507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:27:23.811 [2024-11-20 13:47:15.635519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:23.811 [2024-11-20 13:47:15.635576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:23.811 [2024-11-20 13:47:15.635594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:23.811 [2024-11-20 13:47:15.635609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:23.811 [2024-11-20 13:47:15.635620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:23.811 [2024-11-20 13:47:15.635668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:23.811 [2024-11-20 13:47:15.635684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:23.811 [2024-11-20 13:47:15.635702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:23.811 [2024-11-20 13:47:15.635713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:23.811 [2024-11-20 13:47:15.635767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:23.811 [2024-11-20 13:47:15.635798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:23.811 [2024-11-20 13:47:15.635814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:23.811 [2024-11-20 13:47:15.635825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:23.811 [2024-11-20 13:47:15.636042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 629.385 ms, result 0
00:27:23.811 true
00:27:23.811 13:47:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78041
00:27:23.811 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78041 ']'
00:27:23.811 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78041
00:27:23.811 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:27:23.811 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:23.811 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78041
00:27:23.812 killing process with pid 78041
00:27:23.812 Received shutdown signal, test time was about 4.000000 seconds
00:27:23.812
00:27:23.812                                          Latency(us)
00:27:23.812 [2024-11-20T13:47:15.851Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:23.812 [2024-11-20T13:47:15.851Z] ===================================================================================================================
00:27:23.812 [2024-11-20T13:47:15.851Z] Total              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:27:23.812 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:23.812 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:23.812 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78041'
00:27:23.812 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78041
00:27:23.812 13:47:15 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78041
00:27:24.746 13:47:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:27:24.746 13:47:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:27:24.747 Remove shared memory files
00:27:24.747 13:47:16 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:24.747 13:47:16
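
The killprocess trace just above shows the guard sequence autotest_common.sh runs before terminating bdevperf: confirm a pid was passed, probe it with kill -0, resolve its process name, refuse to signal a sudo wrapper, then kill and wait so the exit status is reaped. A minimal sketch of that flow, reconstructed from the xtrace alone (the real helper in autotest_common.sh handles the sudo case by targeting the child process; that branch is elided here):

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                # '[' -z 78041 ']' in the trace
      kill -0 "$pid" || return 1               # probe: is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 above
      fi
      [ "$process_name" = sudo ] && return 1   # simplification: never signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                              # reap it so the exit code propagates
  }

In the run above every guard passes (process_name resolves to reactor_0), so the helper falls through to kill 78041 and wait 78041.
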
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:27:24.747 13:47:16 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:27:24.747 13:47:16 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:27:24.747 13:47:16 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:24.747 13:47:16 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:27:24.747 ************************************ 00:27:24.747 END TEST ftl_bdevperf 00:27:24.747 ************************************ 00:27:24.747 00:27:24.747 real 0m23.499s 00:27:24.747 user 0m28.381s 00:27:24.747 sys 0m1.116s 00:27:24.747 13:47:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.747 13:47:16 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.747 13:47:16 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:24.747 13:47:16 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:24.747 13:47:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.747 13:47:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:24.747 ************************************ 00:27:24.747 START TEST ftl_trim 00:27:24.747 ************************************ 00:27:24.747 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:25.006 * Looking for test storage... 00:27:25.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.006 13:47:16 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.006 --rc genhtml_branch_coverage=1 00:27:25.006 --rc genhtml_function_coverage=1 00:27:25.006 --rc genhtml_legend=1 00:27:25.006 --rc geninfo_all_blocks=1 00:27:25.006 --rc geninfo_unexecuted_blocks=1 00:27:25.006 00:27:25.006 ' 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.006 --rc genhtml_branch_coverage=1 00:27:25.006 --rc genhtml_function_coverage=1 00:27:25.006 --rc genhtml_legend=1 00:27:25.006 --rc geninfo_all_blocks=1 00:27:25.006 --rc geninfo_unexecuted_blocks=1 00:27:25.006 00:27:25.006 ' 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.006 --rc genhtml_branch_coverage=1 00:27:25.006 --rc genhtml_function_coverage=1 00:27:25.006 --rc genhtml_legend=1 00:27:25.006 --rc geninfo_all_blocks=1 00:27:25.006 --rc geninfo_unexecuted_blocks=1 00:27:25.006 00:27:25.006 ' 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.006 --rc genhtml_branch_coverage=1 00:27:25.006 --rc genhtml_function_coverage=1 00:27:25.006 --rc genhtml_legend=1 00:27:25.006 --rc geninfo_all_blocks=1 00:27:25.006 --rc geninfo_unexecuted_blocks=1 00:27:25.006 00:27:25.006 ' 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
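
The cmp_versions walk just traced (scripts/common.sh@333-368) is the lcov version gate: both version strings are split on '.', '-' and ':' and compared field by field, so lt 1.15 2 resolves at the first field as 1 < 2. A self-contained sketch of that comparison, condensed from the xtrace (the real helper also normalizes each field through a decimal() check and tracks lt/gt/eq state; treat this as an approximation):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      # Split e.g. "1.15" and "2" into fields; a missing field counts as 0,
      # so "2" is compared as "2.0".
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          ((d1 == d2)) && continue             # tie: look at the next field
          case $op in
              '<') ((d1 < d2)); return ;;
              '>') ((d1 > d2)); return ;;
          esac
      done
      [[ $op == *'='* ]]                       # all fields equal
  }

  lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints: decided at field 0, 1 < 2
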
00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:25.006 13:47:16 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78389 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:25.006 13:47:16 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78389 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78389 ']' 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.006 13:47:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:25.264 [2024-11-20 13:47:17.071239] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:27:25.265 [2024-11-20 13:47:17.071624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78389 ] 00:27:25.265 [2024-11-20 13:47:17.263056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.522 [2024-11-20 13:47:17.397931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.522 [2024-11-20 13:47:17.398014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.522 [2024-11-20 13:47:17.398018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.458 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.458 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:26.458 13:47:18 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:26.458 13:47:18 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:27:26.458 13:47:18 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:26.458 13:47:18 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:27:26.458 13:47:18 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:27:26.458 13:47:18 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:26.717 13:47:18 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:26.717 13:47:18 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:27:26.717 13:47:18 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:26.717 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:26.717 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:26.717 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:26.717 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:26.717 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:26.976 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:26.976 { 00:27:26.976 "name": "nvme0n1", 00:27:26.976 "aliases": [ 
00:27:26.976 "a9a14b6e-6d4c-4f4a-b9cb-4f6774061798" 00:27:26.976 ], 00:27:26.976 "product_name": "NVMe disk", 00:27:26.976 "block_size": 4096, 00:27:26.976 "num_blocks": 1310720, 00:27:26.976 "uuid": "a9a14b6e-6d4c-4f4a-b9cb-4f6774061798", 00:27:26.976 "numa_id": -1, 00:27:26.976 "assigned_rate_limits": { 00:27:26.976 "rw_ios_per_sec": 0, 00:27:26.976 "rw_mbytes_per_sec": 0, 00:27:26.976 "r_mbytes_per_sec": 0, 00:27:26.976 "w_mbytes_per_sec": 0 00:27:26.976 }, 00:27:26.976 "claimed": true, 00:27:26.976 "claim_type": "read_many_write_one", 00:27:26.976 "zoned": false, 00:27:26.976 "supported_io_types": { 00:27:26.976 "read": true, 00:27:26.976 "write": true, 00:27:26.976 "unmap": true, 00:27:26.976 "flush": true, 00:27:26.976 "reset": true, 00:27:26.976 "nvme_admin": true, 00:27:26.976 "nvme_io": true, 00:27:26.976 "nvme_io_md": false, 00:27:26.976 "write_zeroes": true, 00:27:26.976 "zcopy": false, 00:27:26.976 "get_zone_info": false, 00:27:26.976 "zone_management": false, 00:27:26.976 "zone_append": false, 00:27:26.976 "compare": true, 00:27:26.976 "compare_and_write": false, 00:27:26.976 "abort": true, 00:27:26.976 "seek_hole": false, 00:27:26.976 "seek_data": false, 00:27:26.976 "copy": true, 00:27:26.976 "nvme_iov_md": false 00:27:26.976 }, 00:27:26.976 "driver_specific": { 00:27:26.976 "nvme": [ 00:27:26.976 { 00:27:26.976 "pci_address": "0000:00:11.0", 00:27:26.976 "trid": { 00:27:26.976 "trtype": "PCIe", 00:27:26.976 "traddr": "0000:00:11.0" 00:27:26.976 }, 00:27:26.976 "ctrlr_data": { 00:27:26.976 "cntlid": 0, 00:27:26.976 "vendor_id": "0x1b36", 00:27:26.976 "model_number": "QEMU NVMe Ctrl", 00:27:26.976 "serial_number": "12341", 00:27:26.976 "firmware_revision": "8.0.0", 00:27:26.976 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:26.976 "oacs": { 00:27:26.976 "security": 0, 00:27:26.976 "format": 1, 00:27:26.976 "firmware": 0, 00:27:26.976 "ns_manage": 1 00:27:26.976 }, 00:27:26.976 "multi_ctrlr": false, 00:27:26.976 "ana_reporting": false 00:27:26.976 }, 00:27:26.976 "vs": { 00:27:26.976 "nvme_version": "1.4" 00:27:26.976 }, 00:27:26.976 "ns_data": { 00:27:26.976 "id": 1, 00:27:26.976 "can_share": false 00:27:26.976 } 00:27:26.976 } 00:27:26.976 ], 00:27:26.976 "mp_policy": "active_passive" 00:27:26.976 } 00:27:26.976 } 00:27:26.976 ]' 00:27:26.976 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:26.976 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:26.976 13:47:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:27.234 13:47:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:27.235 13:47:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:27.235 13:47:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:27:27.235 13:47:19 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:27:27.235 13:47:19 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:27.235 13:47:19 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:27:27.235 13:47:19 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:27.235 13:47:19 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:27.492 13:47:19 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ac2c1d26-6cfd-4028-9a5e-c62d72cbf5aa 00:27:27.492 13:47:19 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:27:27.492 13:47:19 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ac2c1d26-6cfd-4028-9a5e-c62d72cbf5aa 00:27:27.751 13:47:19 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:28.062 13:47:19 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=f7a082b7-001e-4ba6-a6b9-37e9e58c1d38 00:27:28.062 13:47:19 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f7a082b7-001e-4ba6-a6b9-37e9e58c1d38 00:27:28.336 13:47:20 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=35501125-b21f-4052-adc6-504f5406c36e 00:27:28.336 13:47:20 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 35501125-b21f-4052-adc6-504f5406c36e 00:27:28.336 13:47:20 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:27:28.336 13:47:20 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:28.336 13:47:20 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=35501125-b21f-4052-adc6-504f5406c36e 00:27:28.336 13:47:20 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:27:28.336 13:47:20 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 35501125-b21f-4052-adc6-504f5406c36e 00:27:28.336 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=35501125-b21f-4052-adc6-504f5406c36e 00:27:28.336 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:28.336 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:28.336 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:28.336 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35501125-b21f-4052-adc6-504f5406c36e 00:27:28.595 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:28.595 { 00:27:28.595 "name": "35501125-b21f-4052-adc6-504f5406c36e", 00:27:28.595 "aliases": [ 00:27:28.595 "lvs/nvme0n1p0" 00:27:28.595 ], 00:27:28.595 "product_name": "Logical Volume", 00:27:28.595 "block_size": 4096, 00:27:28.595 "num_blocks": 26476544, 00:27:28.595 "uuid": "35501125-b21f-4052-adc6-504f5406c36e", 00:27:28.595 "assigned_rate_limits": { 00:27:28.595 "rw_ios_per_sec": 0, 00:27:28.595 "rw_mbytes_per_sec": 0, 00:27:28.595 "r_mbytes_per_sec": 0, 00:27:28.595 "w_mbytes_per_sec": 0 00:27:28.595 }, 00:27:28.595 "claimed": false, 00:27:28.595 "zoned": false, 00:27:28.595 "supported_io_types": { 00:27:28.595 "read": true, 00:27:28.595 "write": true, 00:27:28.595 "unmap": true, 00:27:28.595 "flush": false, 00:27:28.595 "reset": true, 00:27:28.595 "nvme_admin": false, 00:27:28.595 "nvme_io": false, 00:27:28.595 "nvme_io_md": false, 00:27:28.595 "write_zeroes": true, 00:27:28.595 "zcopy": false, 00:27:28.595 "get_zone_info": false, 00:27:28.595 "zone_management": false, 00:27:28.595 "zone_append": false, 00:27:28.595 "compare": false, 00:27:28.595 "compare_and_write": false, 00:27:28.595 "abort": false, 00:27:28.595 "seek_hole": true, 00:27:28.595 "seek_data": true, 00:27:28.595 "copy": false, 00:27:28.595 "nvme_iov_md": false 00:27:28.595 }, 00:27:28.595 "driver_specific": { 00:27:28.595 "lvol": { 00:27:28.595 "lvol_store_uuid": "f7a082b7-001e-4ba6-a6b9-37e9e58c1d38", 00:27:28.595 "base_bdev": "nvme0n1", 00:27:28.595 "thin_provision": true, 00:27:28.595 "num_allocated_clusters": 0, 00:27:28.595 "snapshot": false, 00:27:28.595 "clone": false, 00:27:28.595 "esnap_clone": false 00:27:28.595 } 00:27:28.595 } 00:27:28.595 } 00:27:28.595 ]' 00:27:28.595 13:47:20 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:28.854 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:28.854 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:28.854 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:28.854 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:28.854 13:47:20 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:28.854 13:47:20 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:27:28.854 13:47:20 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:27:28.854 13:47:20 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:29.113 13:47:21 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:29.113 13:47:21 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:29.113 13:47:21 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 35501125-b21f-4052-adc6-504f5406c36e 00:27:29.113 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=35501125-b21f-4052-adc6-504f5406c36e 00:27:29.113 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:29.113 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:29.113 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:29.113 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35501125-b21f-4052-adc6-504f5406c36e 00:27:29.679 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:29.679 { 00:27:29.679 "name": "35501125-b21f-4052-adc6-504f5406c36e", 00:27:29.679 "aliases": [ 00:27:29.679 "lvs/nvme0n1p0" 00:27:29.679 ], 00:27:29.679 "product_name": "Logical Volume", 00:27:29.679 "block_size": 4096, 00:27:29.679 "num_blocks": 26476544, 00:27:29.679 "uuid": "35501125-b21f-4052-adc6-504f5406c36e", 00:27:29.679 "assigned_rate_limits": { 00:27:29.679 "rw_ios_per_sec": 0, 00:27:29.679 "rw_mbytes_per_sec": 0, 00:27:29.679 "r_mbytes_per_sec": 0, 00:27:29.679 "w_mbytes_per_sec": 0 00:27:29.679 }, 00:27:29.679 "claimed": false, 00:27:29.679 "zoned": false, 00:27:29.679 "supported_io_types": { 00:27:29.679 "read": true, 00:27:29.679 "write": true, 00:27:29.679 "unmap": true, 00:27:29.679 "flush": false, 00:27:29.679 "reset": true, 00:27:29.679 "nvme_admin": false, 00:27:29.679 "nvme_io": false, 00:27:29.679 "nvme_io_md": false, 00:27:29.679 "write_zeroes": true, 00:27:29.679 "zcopy": false, 00:27:29.679 "get_zone_info": false, 00:27:29.679 "zone_management": false, 00:27:29.679 "zone_append": false, 00:27:29.679 "compare": false, 00:27:29.679 "compare_and_write": false, 00:27:29.679 "abort": false, 00:27:29.679 "seek_hole": true, 00:27:29.679 "seek_data": true, 00:27:29.679 "copy": false, 00:27:29.679 "nvme_iov_md": false 00:27:29.679 }, 00:27:29.679 "driver_specific": { 00:27:29.679 "lvol": { 00:27:29.679 "lvol_store_uuid": "f7a082b7-001e-4ba6-a6b9-37e9e58c1d38", 00:27:29.679 "base_bdev": "nvme0n1", 00:27:29.679 "thin_provision": true, 00:27:29.679 "num_allocated_clusters": 0, 00:27:29.679 "snapshot": false, 00:27:29.679 "clone": false, 00:27:29.679 "esnap_clone": false 00:27:29.679 } 00:27:29.679 } 00:27:29.679 } 00:27:29.679 ]' 00:27:29.679 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:29.679 13:47:21 ftl.ftl_trim -- 
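
This is the second get_bdev_size pass in a row: the first, above, reduced nvme0n1's bdev_get_bdevs dump to block_size 4096 x num_blocks 1310720 = 5368709120 bytes, i.e. the 5120 MiB echoed earlier, and the lines that follow do the same for the lvol (4096 x 26476544 = 103424 MiB). A sketch of the helper as it can be reconstructed from the xtrace alone (hypothetical simplification; rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the trace):

  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")   # JSON dump as seen above
      bs=$(jq '.[] .block_size' <<< "$bdev_info")          # 4096 in both traces
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")          # 1310720, then 26476544
      echo $(( bs * nb / 1024 / 1024 ))                    # size in MiB: 5120, then 103424
  }
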
common/autotest_common.sh@1387 -- # bs=4096 00:27:29.679 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:29.679 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:29.679 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:29.679 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:29.679 13:47:21 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:27:29.679 13:47:21 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:29.937 13:47:21 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:27:29.937 13:47:21 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:27:29.937 13:47:21 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 35501125-b21f-4052-adc6-504f5406c36e 00:27:29.937 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=35501125-b21f-4052-adc6-504f5406c36e 00:27:29.937 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:29.937 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:29.937 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:29.937 13:47:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35501125-b21f-4052-adc6-504f5406c36e 00:27:30.197 13:47:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:30.197 { 00:27:30.197 "name": "35501125-b21f-4052-adc6-504f5406c36e", 00:27:30.197 "aliases": [ 00:27:30.197 "lvs/nvme0n1p0" 00:27:30.197 ], 00:27:30.197 "product_name": "Logical Volume", 00:27:30.197 "block_size": 4096, 00:27:30.197 "num_blocks": 26476544, 00:27:30.197 "uuid": "35501125-b21f-4052-adc6-504f5406c36e", 00:27:30.197 "assigned_rate_limits": { 00:27:30.197 "rw_ios_per_sec": 0, 00:27:30.197 "rw_mbytes_per_sec": 0, 00:27:30.197 "r_mbytes_per_sec": 0, 00:27:30.197 "w_mbytes_per_sec": 0 00:27:30.197 }, 00:27:30.197 "claimed": false, 00:27:30.197 "zoned": false, 00:27:30.197 "supported_io_types": { 00:27:30.197 "read": true, 00:27:30.197 "write": true, 00:27:30.197 "unmap": true, 00:27:30.197 "flush": false, 00:27:30.197 "reset": true, 00:27:30.197 "nvme_admin": false, 00:27:30.197 "nvme_io": false, 00:27:30.197 "nvme_io_md": false, 00:27:30.197 "write_zeroes": true, 00:27:30.197 "zcopy": false, 00:27:30.197 "get_zone_info": false, 00:27:30.197 "zone_management": false, 00:27:30.197 "zone_append": false, 00:27:30.197 "compare": false, 00:27:30.197 "compare_and_write": false, 00:27:30.197 "abort": false, 00:27:30.197 "seek_hole": true, 00:27:30.197 "seek_data": true, 00:27:30.197 "copy": false, 00:27:30.197 "nvme_iov_md": false 00:27:30.197 }, 00:27:30.197 "driver_specific": { 00:27:30.197 "lvol": { 00:27:30.197 "lvol_store_uuid": "f7a082b7-001e-4ba6-a6b9-37e9e58c1d38", 00:27:30.197 "base_bdev": "nvme0n1", 00:27:30.197 "thin_provision": true, 00:27:30.197 "num_allocated_clusters": 0, 00:27:30.197 "snapshot": false, 00:27:30.197 "clone": false, 00:27:30.197 "esnap_clone": false 00:27:30.197 } 00:27:30.197 } 00:27:30.197 } 00:27:30.197 ]' 00:27:30.197 13:47:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:30.197 13:47:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:30.197 13:47:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:30.197 13:47:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:27:30.197 13:47:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:30.197 13:47:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:30.197 13:47:22 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:27:30.197 13:47:22 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35501125-b21f-4052-adc6-504f5406c36e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:27:30.766 [2024-11-20 13:47:22.518922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.518986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:30.766 [2024-11-20 13:47:22.519013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:30.766 [2024-11-20 13:47:22.519027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.522940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.522988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:30.766 [2024-11-20 13:47:22.523010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.874 ms 00:27:30.766 [2024-11-20 13:47:22.523023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.523246] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:30.766 [2024-11-20 13:47:22.524241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:30.766 [2024-11-20 13:47:22.524470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.524500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:30.766 [2024-11-20 13:47:22.524518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:27:30.766 [2024-11-20 13:47:22.524530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.524770] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:27:30.766 [2024-11-20 13:47:22.525889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.525936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:30.766 [2024-11-20 13:47:22.525955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:30.766 [2024-11-20 13:47:22.525970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.530773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.530845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:30.766 [2024-11-20 13:47:22.530906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:27:30.766 [2024-11-20 13:47:22.530935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.531166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.531193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:30.766 [2024-11-20 13:47:22.531208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.102 ms 00:27:30.766 [2024-11-20 13:47:22.531227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.531279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.531299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:30.766 [2024-11-20 13:47:22.531313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:30.766 [2024-11-20 13:47:22.531330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.531379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:30.766 [2024-11-20 13:47:22.536103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.536163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:30.766 [2024-11-20 13:47:22.536184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:27:30.766 [2024-11-20 13:47:22.536197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.536298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.536317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:30.766 [2024-11-20 13:47:22.536333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:30.766 [2024-11-20 13:47:22.536369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.536411] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:30.766 [2024-11-20 13:47:22.536573] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:30.766 [2024-11-20 13:47:22.536598] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:30.766 [2024-11-20 13:47:22.536614] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:30.766 [2024-11-20 13:47:22.536631] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:30.766 [2024-11-20 13:47:22.536645] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:30.766 [2024-11-20 13:47:22.536660] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:30.766 [2024-11-20 13:47:22.536672] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:30.766 [2024-11-20 13:47:22.536685] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:30.766 [2024-11-20 13:47:22.536699] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:30.766 [2024-11-20 13:47:22.536716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 [2024-11-20 13:47:22.536729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:30.766 [2024-11-20 13:47:22.536743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:27:30.766 [2024-11-20 13:47:22.536755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.536892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.766 
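
Stripped of the xtrace noise, the device stack whose FTL layout is being dumped here was assembled by the rpc.py calls traced over the preceding lines; collected in one place for reference (rpc.py again abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and every value is taken from the trace):

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (12341)
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore f7a082b7-...
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f7a082b7-001e-4ba6-a6b9-37e9e58c1d38
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe
  rpc.py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB nvc0n1p0
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35501125-b21f-4052-adc6-504f5406c36e \
      -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
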
[2024-11-20 13:47:22.536925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:30.766 [2024-11-20 13:47:22.536951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:30.766 [2024-11-20 13:47:22.536968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.766 [2024-11-20 13:47:22.537112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:30.766 [2024-11-20 13:47:22.537129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:30.766 [2024-11-20 13:47:22.537144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.766 [2024-11-20 13:47:22.537157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:30.766 [2024-11-20 13:47:22.537181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:30.766 [2024-11-20 13:47:22.537206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:30.766 [2024-11-20 13:47:22.537219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.766 [2024-11-20 13:47:22.537243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:30.766 [2024-11-20 13:47:22.537254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:30.766 [2024-11-20 13:47:22.537266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.766 [2024-11-20 13:47:22.537278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:30.766 [2024-11-20 13:47:22.537290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:30.766 [2024-11-20 13:47:22.537301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:30.766 [2024-11-20 13:47:22.537330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:30.766 [2024-11-20 13:47:22.537342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:30.766 [2024-11-20 13:47:22.537377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.766 [2024-11-20 13:47:22.537400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:30.766 [2024-11-20 13:47:22.537411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.766 [2024-11-20 13:47:22.537437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:30.766 [2024-11-20 13:47:22.537450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.766 [2024-11-20 13:47:22.537473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:27:30.766 [2024-11-20 13:47:22.537484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.766 [2024-11-20 13:47:22.537507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:30.766 [2024-11-20 13:47:22.537522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:30.766 [2024-11-20 13:47:22.537533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.766 [2024-11-20 13:47:22.537546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:30.766 [2024-11-20 13:47:22.537557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:30.766 [2024-11-20 13:47:22.537569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.766 [2024-11-20 13:47:22.537580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:30.767 [2024-11-20 13:47:22.537592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:30.767 [2024-11-20 13:47:22.537602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.767 [2024-11-20 13:47:22.537615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:30.767 [2024-11-20 13:47:22.537626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:30.767 [2024-11-20 13:47:22.537638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.767 [2024-11-20 13:47:22.537649] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:30.767 [2024-11-20 13:47:22.537664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:30.767 [2024-11-20 13:47:22.537676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.767 [2024-11-20 13:47:22.537689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.767 [2024-11-20 13:47:22.537701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:30.767 [2024-11-20 13:47:22.537716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:30.767 [2024-11-20 13:47:22.537727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:30.767 [2024-11-20 13:47:22.537741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:30.767 [2024-11-20 13:47:22.537751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:30.767 [2024-11-20 13:47:22.537764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:30.767 [2024-11-20 13:47:22.537780] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:30.767 [2024-11-20 13:47:22.537797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.767 [2024-11-20 13:47:22.537813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:30.767 [2024-11-20 13:47:22.537829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:30.767 [2024-11-20 13:47:22.537841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:27:30.767 [2024-11-20 13:47:22.537855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:30.767 [2024-11-20 13:47:22.537886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:30.767 [2024-11-20 13:47:22.537914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:30.767 [2024-11-20 13:47:22.537934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:30.767 [2024-11-20 13:47:22.537948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:30.767 [2024-11-20 13:47:22.537960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:30.767 [2024-11-20 13:47:22.537975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:30.767 [2024-11-20 13:47:22.537987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:30.767 [2024-11-20 13:47:22.538003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:30.767 [2024-11-20 13:47:22.538015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:30.767 [2024-11-20 13:47:22.538028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:30.767 [2024-11-20 13:47:22.538040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:30.767 [2024-11-20 13:47:22.538059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.767 [2024-11-20 13:47:22.538072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.767 [2024-11-20 13:47:22.538086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:30.767 [2024-11-20 13:47:22.538098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:30.767 [2024-11-20 13:47:22.538112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:30.767 [2024-11-20 13:47:22.538126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.767 [2024-11-20 13:47:22.538141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:30.767 [2024-11-20 13:47:22.538153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:27:30.767 [2024-11-20 13:47:22.538166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.767 [2024-11-20 13:47:22.538259] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:27:30.767 [2024-11-20 13:47:22.538282] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:32.670 [2024-11-20 13:47:24.505630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.505721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:32.670 [2024-11-20 13:47:24.505747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1967.381 ms 00:27:32.670 [2024-11-20 13:47:24.505765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.544818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.544930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.670 [2024-11-20 13:47:24.544963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.602 ms 00:27:32.670 [2024-11-20 13:47:24.544983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.545284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.545324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:32.670 [2024-11-20 13:47:24.545342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:32.670 [2024-11-20 13:47:24.545366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.607946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.608050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.670 [2024-11-20 13:47:24.608083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.503 ms 00:27:32.670 [2024-11-20 13:47:24.608107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.608299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.608337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.670 [2024-11-20 13:47:24.608361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:32.670 [2024-11-20 13:47:24.608384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.608818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.608886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.670 [2024-11-20 13:47:24.608912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:27:32.670 [2024-11-20 13:47:24.608934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.609177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.609204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.670 [2024-11-20 13:47:24.609224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:27:32.670 [2024-11-20 13:47:24.609249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.628175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.628246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:27:32.670 [2024-11-20 13:47:24.628267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms 00:27:32.670 [2024-11-20 13:47:24.628282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.670 [2024-11-20 13:47:24.641825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:32.670 [2024-11-20 13:47:24.656379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.670 [2024-11-20 13:47:24.656464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:32.670 [2024-11-20 13:47:24.656489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.914 ms 00:27:32.670 [2024-11-20 13:47:24.656503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.929 [2024-11-20 13:47:24.715993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.929 [2024-11-20 13:47:24.716071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:32.929 [2024-11-20 13:47:24.716096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.311 ms 00:27:32.929 [2024-11-20 13:47:24.716109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.929 [2024-11-20 13:47:24.716412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.929 [2024-11-20 13:47:24.716434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:32.929 [2024-11-20 13:47:24.716454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:27:32.929 [2024-11-20 13:47:24.716466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.929 [2024-11-20 13:47:24.748330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.929 [2024-11-20 13:47:24.748405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:32.929 [2024-11-20 13:47:24.748431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.810 ms 00:27:32.929 [2024-11-20 13:47:24.748444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.929 [2024-11-20 13:47:24.780311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.929 [2024-11-20 13:47:24.780561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:32.929 [2024-11-20 13:47:24.780599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.695 ms 00:27:32.929 [2024-11-20 13:47:24.780613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.929 [2024-11-20 13:47:24.781442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.929 [2024-11-20 13:47:24.781480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:32.929 [2024-11-20 13:47:24.781499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:27:32.929 [2024-11-20 13:47:24.781512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.929 [2024-11-20 13:47:24.867639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.929 [2024-11-20 13:47:24.867918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:32.929 [2024-11-20 13:47:24.867959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.082 ms 00:27:32.929 [2024-11-20 13:47:24.867975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
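(Aside: each management step in this startup sequence is logged by mngt/ftl_mngt.c as a four-record group: Action, name, duration, status. Where the startup time goes can be tabulated straight from a captured console log by pairing the name and duration records. A minimal shell sketch, assuming the records were saved one per line to a file named build.log, which is a hypothetical name:

  # Tabulate FTL management step durations, longest first.
  # The patterns match the trace_step records shown in this log.
  awk '/trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name:/ {
         sub(/.*name: /, ""); name = $0
       }
       /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration:/ {
         sub(/.*duration: /, ""); sub(/ ms$/, "")
         printf "%10.3f ms  %s\n", $0, name
       }' build.log | sort -rn

On this run the table would be headed by "Scrub NV cache" at 1967.381 ms, which accounts for most of the 2450.137 ms "FTL startup" total reported a few records below.)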
00:27:32.929 [2024-11-20 13:47:24.903028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.929 [2024-11-20 13:47:24.903256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:27:32.929 [2024-11-20 13:47:24.903294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.890 ms
00:27:32.929 [2024-11-20 13:47:24.903308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.929 [2024-11-20 13:47:24.935565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.929 [2024-11-20 13:47:24.935629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:27:32.929 [2024-11-20 13:47:24.935652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.142 ms
00:27:32.929 [2024-11-20 13:47:24.935665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.187 [2024-11-20 13:47:24.967814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:33.187 [2024-11-20 13:47:24.967909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:33.188 [2024-11-20 13:47:24.967935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.022 ms
00:27:33.188 [2024-11-20 13:47:24.967970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.188 [2024-11-20 13:47:24.968111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:33.188 [2024-11-20 13:47:24.968134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:27:33.188 [2024-11-20 13:47:24.968154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:27:33.188 [2024-11-20 13:47:24.968166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.188 [2024-11-20 13:47:24.968268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:33.188 [2024-11-20 13:47:24.968292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:33.188 [2024-11-20 13:47:24.968309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:27:33.188 [2024-11-20 13:47:24.968321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.188 [2024-11-20 13:47:24.969339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:33.188 [2024-11-20 13:47:24.973803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2450.137 ms, result 0
00:27:33.188 {
00:27:33.188 "name": "ftl0",
00:27:33.188 "uuid": "7ae2244f-4aa0-4231-b84f-5d9369f8abc2"
00:27:33.188 }
00:27:33.188 [2024-11-20 13:47:24.974588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:33.188 13:47:24 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
13:47:24 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
13:47:24 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout=
13:47:24 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i
13:47:24 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]]
13:47:24 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000
13:47:24 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:27:33.446 13:47:25 ftl.ftl_trim --
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:33.705 [ 00:27:33.705 { 00:27:33.705 "name": "ftl0", 00:27:33.705 "aliases": [ 00:27:33.705 "7ae2244f-4aa0-4231-b84f-5d9369f8abc2" 00:27:33.705 ], 00:27:33.705 "product_name": "FTL disk", 00:27:33.705 "block_size": 4096, 00:27:33.705 "num_blocks": 23592960, 00:27:33.705 "uuid": "7ae2244f-4aa0-4231-b84f-5d9369f8abc2", 00:27:33.705 "assigned_rate_limits": { 00:27:33.705 "rw_ios_per_sec": 0, 00:27:33.705 "rw_mbytes_per_sec": 0, 00:27:33.705 "r_mbytes_per_sec": 0, 00:27:33.705 "w_mbytes_per_sec": 0 00:27:33.705 }, 00:27:33.705 "claimed": false, 00:27:33.705 "zoned": false, 00:27:33.705 "supported_io_types": { 00:27:33.705 "read": true, 00:27:33.705 "write": true, 00:27:33.705 "unmap": true, 00:27:33.705 "flush": true, 00:27:33.705 "reset": false, 00:27:33.705 "nvme_admin": false, 00:27:33.705 "nvme_io": false, 00:27:33.705 "nvme_io_md": false, 00:27:33.705 "write_zeroes": true, 00:27:33.705 "zcopy": false, 00:27:33.705 "get_zone_info": false, 00:27:33.705 "zone_management": false, 00:27:33.705 "zone_append": false, 00:27:33.705 "compare": false, 00:27:33.705 "compare_and_write": false, 00:27:33.705 "abort": false, 00:27:33.705 "seek_hole": false, 00:27:33.705 "seek_data": false, 00:27:33.705 "copy": false, 00:27:33.705 "nvme_iov_md": false 00:27:33.705 }, 00:27:33.705 "driver_specific": { 00:27:33.705 "ftl": { 00:27:33.705 "base_bdev": "35501125-b21f-4052-adc6-504f5406c36e", 00:27:33.705 "cache": "nvc0n1p0" 00:27:33.705 } 00:27:33.705 } 00:27:33.705 } 00:27:33.705 ] 00:27:33.705 13:47:25 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:27:33.705 13:47:25 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:27:33.705 13:47:25 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:33.964 13:47:25 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:27:33.964 13:47:25 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:27:34.223 13:47:26 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:27:34.223 { 00:27:34.223 "name": "ftl0", 00:27:34.223 "aliases": [ 00:27:34.223 "7ae2244f-4aa0-4231-b84f-5d9369f8abc2" 00:27:34.223 ], 00:27:34.223 "product_name": "FTL disk", 00:27:34.223 "block_size": 4096, 00:27:34.223 "num_blocks": 23592960, 00:27:34.223 "uuid": "7ae2244f-4aa0-4231-b84f-5d9369f8abc2", 00:27:34.223 "assigned_rate_limits": { 00:27:34.223 "rw_ios_per_sec": 0, 00:27:34.223 "rw_mbytes_per_sec": 0, 00:27:34.223 "r_mbytes_per_sec": 0, 00:27:34.223 "w_mbytes_per_sec": 0 00:27:34.223 }, 00:27:34.223 "claimed": false, 00:27:34.223 "zoned": false, 00:27:34.223 "supported_io_types": { 00:27:34.223 "read": true, 00:27:34.223 "write": true, 00:27:34.223 "unmap": true, 00:27:34.223 "flush": true, 00:27:34.223 "reset": false, 00:27:34.223 "nvme_admin": false, 00:27:34.223 "nvme_io": false, 00:27:34.223 "nvme_io_md": false, 00:27:34.223 "write_zeroes": true, 00:27:34.223 "zcopy": false, 00:27:34.223 "get_zone_info": false, 00:27:34.223 "zone_management": false, 00:27:34.223 "zone_append": false, 00:27:34.223 "compare": false, 00:27:34.223 "compare_and_write": false, 00:27:34.223 "abort": false, 00:27:34.223 "seek_hole": false, 00:27:34.223 "seek_data": false, 00:27:34.223 "copy": false, 00:27:34.223 "nvme_iov_md": false 00:27:34.223 }, 00:27:34.223 "driver_specific": { 00:27:34.223 "ftl": { 00:27:34.223 "base_bdev": "35501125-b21f-4052-adc6-504f5406c36e", 
00:27:34.223 "cache": "nvc0n1p0" 00:27:34.223 } 00:27:34.223 } 00:27:34.223 } 00:27:34.223 ]' 00:27:34.223 13:47:26 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:27:34.482 13:47:26 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:27:34.482 13:47:26 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:34.742 [2024-11-20 13:47:26.583401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.583473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:34.742 [2024-11-20 13:47:26.583498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:34.742 [2024-11-20 13:47:26.583517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.583566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:34.742 [2024-11-20 13:47:26.586998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.587037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.742 [2024-11-20 13:47:26.587067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:27:34.742 [2024-11-20 13:47:26.587079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.587685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.587720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.742 [2024-11-20 13:47:26.587739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:27:34.742 [2024-11-20 13:47:26.587751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.591492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.591531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:34.742 [2024-11-20 13:47:26.591550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms 00:27:34.742 [2024-11-20 13:47:26.591562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.599159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.599199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:34.742 [2024-11-20 13:47:26.599218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.511 ms 00:27:34.742 [2024-11-20 13:47:26.599230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.630948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.631037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:34.742 [2024-11-20 13:47:26.631068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.608 ms 00:27:34.742 [2024-11-20 13:47:26.631080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.649800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.650080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:34.742 [2024-11-20 13:47:26.650120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.571 ms 00:27:34.742 [2024-11-20 13:47:26.650139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.650408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.650430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:34.742 [2024-11-20 13:47:26.650447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:27:34.742 [2024-11-20 13:47:26.650460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.682180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.742 [2024-11-20 13:47:26.682250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:34.742 [2024-11-20 13:47:26.682275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.669 ms 00:27:34.742 [2024-11-20 13:47:26.682287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.742 [2024-11-20 13:47:26.713974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.743 [2024-11-20 13:47:26.714042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:34.743 [2024-11-20 13:47:26.714069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.538 ms 00:27:34.743 [2024-11-20 13:47:26.714082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.743 [2024-11-20 13:47:26.745173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.743 [2024-11-20 13:47:26.745411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:34.743 [2024-11-20 13:47:26.745448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.954 ms 00:27:34.743 [2024-11-20 13:47:26.745461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.743 [2024-11-20 13:47:26.776827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.743 [2024-11-20 13:47:26.776948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:34.743 [2024-11-20 13:47:26.776977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.176 ms 00:27:34.743 [2024-11-20 13:47:26.776990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.743 [2024-11-20 13:47:26.777135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:34.743 [2024-11-20 13:47:26.777163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777264] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 
[2024-11-20 13:47:26.777622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:34.743 [2024-11-20 13:47:26.777990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:27:34.744 [2024-11-20 13:47:26.778003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:34.744 [2024-11-20 13:47:26.778583] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:34.744 [2024-11-20 13:47:26.778599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:27:34.744 [2024-11-20 13:47:26.778612] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:34.744 [2024-11-20 13:47:26.778625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:34.744 [2024-11-20 13:47:26.778637] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:34.744 [2024-11-20 13:47:26.778655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:34.744 [2024-11-20 13:47:26.778667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:34.744 [2024-11-20 13:47:26.778695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:27:34.744 [2024-11-20 13:47:26.778707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:34.744 [2024-11-20 13:47:26.778719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:34.744 [2024-11-20 13:47:26.778730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:34.744 [2024-11-20 13:47:26.778744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.744 [2024-11-20 13:47:26.778756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:34.744 [2024-11-20 13:47:26.778772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:27:34.744 [2024-11-20 13:47:26.778784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.004 [2024-11-20 13:47:26.795776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.004 [2024-11-20 13:47:26.795841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:35.004 [2024-11-20 13:47:26.795887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.935 ms 00:27:35.004 [2024-11-20 13:47:26.795923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.004 [2024-11-20 13:47:26.796512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.004 [2024-11-20 13:47:26.796553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:35.004 [2024-11-20 13:47:26.796574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:27:35.004 [2024-11-20 13:47:26.796586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.004 [2024-11-20 13:47:26.855624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.004 [2024-11-20 13:47:26.855980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.005 [2024-11-20 13:47:26.856022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.005 [2024-11-20 13:47:26.856037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.005 [2024-11-20 13:47:26.856238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.005 [2024-11-20 13:47:26.856257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.005 [2024-11-20 13:47:26.856273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.005 [2024-11-20 13:47:26.856285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.005 [2024-11-20 13:47:26.856395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.005 [2024-11-20 13:47:26.856415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.005 [2024-11-20 13:47:26.856437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.005 [2024-11-20 13:47:26.856450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.005 [2024-11-20 13:47:26.856491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.005 [2024-11-20 13:47:26.856505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.005 [2024-11-20 13:47:26.856520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.005 [2024-11-20 13:47:26.856532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.005 [2024-11-20 13:47:26.968680] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.005 [2024-11-20 13:47:26.968789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.005 [2024-11-20 13:47:26.968814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.005 [2024-11-20 13:47:26.968831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.264 [2024-11-20 13:47:27.055951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.264 [2024-11-20 13:47:27.056025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.264 [2024-11-20 13:47:27.056049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.264 [2024-11-20 13:47:27.056062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.264 [2024-11-20 13:47:27.056192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.264 [2024-11-20 13:47:27.056213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.264 [2024-11-20 13:47:27.056254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.264 [2024-11-20 13:47:27.056269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.264 [2024-11-20 13:47:27.056332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.264 [2024-11-20 13:47:27.056347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.264 [2024-11-20 13:47:27.056362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.264 [2024-11-20 13:47:27.056374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.264 [2024-11-20 13:47:27.056529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.264 [2024-11-20 13:47:27.056550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.264 [2024-11-20 13:47:27.056566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.264 [2024-11-20 13:47:27.056580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.264 [2024-11-20 13:47:27.056662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.264 [2024-11-20 13:47:27.056682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:35.264 [2024-11-20 13:47:27.056697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.264 [2024-11-20 13:47:27.056710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.264 [2024-11-20 13:47:27.056774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.264 [2024-11-20 13:47:27.056790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.264 [2024-11-20 13:47:27.056807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.264 [2024-11-20 13:47:27.056819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.264 [2024-11-20 13:47:27.056932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.264 [2024-11-20 13:47:27.056956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.264 [2024-11-20 13:47:27.056972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.264 [2024-11-20 13:47:27.056983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:27:35.264 [2024-11-20 13:47:27.057203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 473.786 ms, result 0
00:27:35.264 true
13:47:27 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78389
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78389 ']'
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78389
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78389
killing process with pid 78389
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78389'
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78389
13:47:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78389
00:27:40.528 13:47:31 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:27:41.488 65536+0 records in
00:27:41.488 65536+0 records out
00:27:41.488 268435456 bytes (268 MB, 256 MiB) copied, 1.26554 s, 212 MB/s
00:27:41.488 13:47:33 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 13:47:33.258451] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization...
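(Aside: the dd summary a few records above is internally consistent: 65536 blocks of 4 KiB are 268435456 bytes, i.e. 256 MiB, and 268435456 bytes in 1.26554 s is roughly 212 MB/s in decimal megabytes. A quick shell check of the same arithmetic:

  # Verify the dd figures quoted in the log.
  echo $((65536 * 4096))            # 268435456 bytes
  echo $((65536 * 4096 / 1048576))  # 256 MiB
  awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.26554 / 1000000 }'  # ~212 MB/s

The 256 MiB of random data is presumably the random_pattern file that the spdk_dd invocation just above then writes through the ftl0 bdev.)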
00:27:41.488 [2024-11-20 13:47:33.258599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78594 ] 00:27:41.488 [2024-11-20 13:47:33.433327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.747 [2024-11-20 13:47:33.536414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.005 [2024-11-20 13:47:33.858706] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.005 [2024-11-20 13:47:33.858792] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.005 [2024-11-20 13:47:34.024650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.005 [2024-11-20 13:47:34.024733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:42.005 [2024-11-20 13:47:34.024755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:42.005 [2024-11-20 13:47:34.024767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.005 [2024-11-20 13:47:34.028588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.005 [2024-11-20 13:47:34.028822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.005 [2024-11-20 13:47:34.028854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.788 ms 00:27:42.005 [2024-11-20 13:47:34.028892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.005 [2024-11-20 13:47:34.029122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:42.005 [2024-11-20 13:47:34.030163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:42.005 [2024-11-20 13:47:34.030211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.005 [2024-11-20 13:47:34.030226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.005 [2024-11-20 13:47:34.030239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:27:42.005 [2024-11-20 13:47:34.030250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.005 [2024-11-20 13:47:34.031571] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:42.264 [2024-11-20 13:47:34.048759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.264 [2024-11-20 13:47:34.048886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:42.264 [2024-11-20 13:47:34.048910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.184 ms 00:27:42.264 [2024-11-20 13:47:34.048923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.264 [2024-11-20 13:47:34.049139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.264 [2024-11-20 13:47:34.049173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:42.264 [2024-11-20 13:47:34.049196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:42.264 [2024-11-20 13:47:34.049222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.264 [2024-11-20 13:47:34.054214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:42.264 [2024-11-20 13:47:34.054286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.264 [2024-11-20 13:47:34.054305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:27:42.264 [2024-11-20 13:47:34.054317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.264 [2024-11-20 13:47:34.054490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.264 [2024-11-20 13:47:34.054512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.264 [2024-11-20 13:47:34.054526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:42.264 [2024-11-20 13:47:34.054537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.264 [2024-11-20 13:47:34.054583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.264 [2024-11-20 13:47:34.054600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:42.264 [2024-11-20 13:47:34.054612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:42.264 [2024-11-20 13:47:34.054623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.264 [2024-11-20 13:47:34.054686] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:42.264 [2024-11-20 13:47:34.059142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.264 [2024-11-20 13:47:34.059190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.264 [2024-11-20 13:47:34.059207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.481 ms 00:27:42.264 [2024-11-20 13:47:34.059219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.264 [2024-11-20 13:47:34.059337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.264 [2024-11-20 13:47:34.059358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:42.264 [2024-11-20 13:47:34.059371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:42.264 [2024-11-20 13:47:34.059382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.264 [2024-11-20 13:47:34.059421] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:42.264 [2024-11-20 13:47:34.059450] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:42.265 [2024-11-20 13:47:34.059494] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:42.265 [2024-11-20 13:47:34.059514] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:42.265 [2024-11-20 13:47:34.059630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:42.265 [2024-11-20 13:47:34.059647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:42.265 [2024-11-20 13:47:34.059663] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:42.265 [2024-11-20 13:47:34.059696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:42.265 [2024-11-20 13:47:34.059709] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:42.265 [2024-11-20 13:47:34.059722] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:42.265 [2024-11-20 13:47:34.059733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:42.265 [2024-11-20 13:47:34.059743] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:42.265 [2024-11-20 13:47:34.059754] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:42.265 [2024-11-20 13:47:34.059766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.265 [2024-11-20 13:47:34.059778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:42.265 [2024-11-20 13:47:34.059790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:27:42.265 [2024-11-20 13:47:34.059800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.265 [2024-11-20 13:47:34.059945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.265 [2024-11-20 13:47:34.059981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:42.265 [2024-11-20 13:47:34.059994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:27:42.265 [2024-11-20 13:47:34.060005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.265 [2024-11-20 13:47:34.060137] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:42.265 [2024-11-20 13:47:34.060157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:42.265 [2024-11-20 13:47:34.060169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:42.265 [2024-11-20 13:47:34.060202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:42.265 [2024-11-20 13:47:34.060233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.265 [2024-11-20 13:47:34.060253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:42.265 [2024-11-20 13:47:34.060264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:42.265 [2024-11-20 13:47:34.060274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.265 [2024-11-20 13:47:34.060313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:42.265 [2024-11-20 13:47:34.060325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:42.265 [2024-11-20 13:47:34.060335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:42.265 [2024-11-20 13:47:34.060356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060366] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:42.265 [2024-11-20 13:47:34.060387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:42.265 [2024-11-20 13:47:34.060418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:42.265 [2024-11-20 13:47:34.060448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:42.265 [2024-11-20 13:47:34.060478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:42.265 [2024-11-20 13:47:34.060508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.265 [2024-11-20 13:47:34.060528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:42.265 [2024-11-20 13:47:34.060538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:42.265 [2024-11-20 13:47:34.060548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.265 [2024-11-20 13:47:34.060558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:42.265 [2024-11-20 13:47:34.060568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:42.265 [2024-11-20 13:47:34.060578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:42.265 [2024-11-20 13:47:34.060598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:42.265 [2024-11-20 13:47:34.060608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:42.265 [2024-11-20 13:47:34.060629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:42.265 [2024-11-20 13:47:34.060651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.265 [2024-11-20 13:47:34.060675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:42.265 [2024-11-20 13:47:34.060685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:42.265 [2024-11-20 13:47:34.060697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:42.265 
[2024-11-20 13:47:34.060707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:42.265 [2024-11-20 13:47:34.060717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:42.265 [2024-11-20 13:47:34.060729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:42.265 [2024-11-20 13:47:34.060741] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:42.265 [2024-11-20 13:47:34.060756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.265 [2024-11-20 13:47:34.060769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:42.265 [2024-11-20 13:47:34.060780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:42.265 [2024-11-20 13:47:34.060791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:42.265 [2024-11-20 13:47:34.060802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:42.265 [2024-11-20 13:47:34.060813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:42.265 [2024-11-20 13:47:34.060824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:42.265 [2024-11-20 13:47:34.060836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:42.265 [2024-11-20 13:47:34.060847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:42.265 [2024-11-20 13:47:34.060858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:42.265 [2024-11-20 13:47:34.060885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:42.265 [2024-11-20 13:47:34.060898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:42.265 [2024-11-20 13:47:34.060910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:42.265 [2024-11-20 13:47:34.060921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:42.265 [2024-11-20 13:47:34.060933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:42.265 [2024-11-20 13:47:34.060944] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:42.266 [2024-11-20 13:47:34.060957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.266 [2024-11-20 13:47:34.060969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.266 [2024-11-20 13:47:34.060980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:42.266 [2024-11-20 13:47:34.060992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:42.266 [2024-11-20 13:47:34.061004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:42.266 [2024-11-20 13:47:34.061018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.061041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:42.266 [2024-11-20 13:47:34.061053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:27:42.266 [2024-11-20 13:47:34.061066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.095322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.095589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.266 [2024-11-20 13:47:34.095725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.163 ms 00:27:42.266 [2024-11-20 13:47:34.095842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.096198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.096341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:42.266 [2024-11-20 13:47:34.096464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:42.266 [2024-11-20 13:47:34.096516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.146535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.146830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.266 [2024-11-20 13:47:34.146988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.898 ms 00:27:42.266 [2024-11-20 13:47:34.147111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.147333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.147404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.266 [2024-11-20 13:47:34.147565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:42.266 [2024-11-20 13:47:34.147688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.148131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.148279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.266 [2024-11-20 13:47:34.148401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:27:42.266 [2024-11-20 13:47:34.148450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.148706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.148844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.266 [2024-11-20 13:47:34.148977] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:27:42.266 [2024-11-20 13:47:34.149028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.166642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.166954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.266 [2024-11-20 13:47:34.167081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.504 ms 00:27:42.266 [2024-11-20 13:47:34.167133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.184312] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:42.266 [2024-11-20 13:47:34.184388] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:42.266 [2024-11-20 13:47:34.184412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.184425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:42.266 [2024-11-20 13:47:34.184439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.986 ms 00:27:42.266 [2024-11-20 13:47:34.184451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.218029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.218166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:42.266 [2024-11-20 13:47:34.218230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.397 ms 00:27:42.266 [2024-11-20 13:47:34.218253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.241898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.242003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:42.266 [2024-11-20 13:47:34.242034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.400 ms 00:27:42.266 [2024-11-20 13:47:34.242056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.264756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.264885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:42.266 [2024-11-20 13:47:34.264915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.483 ms 00:27:42.266 [2024-11-20 13:47:34.264927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.266 [2024-11-20 13:47:34.265846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.266 [2024-11-20 13:47:34.265902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:42.266 [2024-11-20 13:47:34.265919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:27:42.266 [2024-11-20 13:47:34.265931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.341445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.341521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:42.525 [2024-11-20 13:47:34.341541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.477 ms 00:27:42.525 [2024-11-20 13:47:34.341553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.354594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:42.525 [2024-11-20 13:47:34.368811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.368901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:42.525 [2024-11-20 13:47:34.368923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.085 ms 00:27:42.525 [2024-11-20 13:47:34.368936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.369151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.369173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:42.525 [2024-11-20 13:47:34.369187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:42.525 [2024-11-20 13:47:34.369198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.369278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.369296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:42.525 [2024-11-20 13:47:34.369308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:42.525 [2024-11-20 13:47:34.369319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.369366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.369392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:42.525 [2024-11-20 13:47:34.369405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:42.525 [2024-11-20 13:47:34.369416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.369468] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:42.525 [2024-11-20 13:47:34.369486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.369497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:42.525 [2024-11-20 13:47:34.369509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:42.525 [2024-11-20 13:47:34.369520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.404048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.404123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:42.525 [2024-11-20 13:47:34.404143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.495 ms 00:27:42.525 [2024-11-20 13:47:34.404155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.525 [2024-11-20 13:47:34.404379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.525 [2024-11-20 13:47:34.404402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:42.525 [2024-11-20 13:47:34.404415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:42.525 [2024-11-20 13:47:34.404426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
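The trace above repeats one fixed pattern per management step: an "Action" marker, the step name, the measured duration, and a status code, reported by trace_step() in mngt/ftl_mngt.c as each startup step completes. A minimal C sketch of that timing pattern, assuming hypothetical step callbacks (an illustration only, not SPDK's actual ftl_mngt implementation):

    #include <stdio.h>
    #include <time.h>

    /* One entry in a startup pipeline: a human-readable name plus a
     * callback returning 0 on success (mirroring "status: 0" above). */
    struct step {
        const char *name;
        int (*fn)(void);
    };

    static int init_pools(void) { return 0; }   /* hypothetical step */
    static int init_bands(void) { return 0; }   /* hypothetical step */

    static double ms_since(const struct timespec *t0)
    {
        struct timespec t1;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0->tv_sec) * 1e3 +
               (t1.tv_nsec - t0->tv_nsec) / 1e6;
    }

    int main(void)
    {
        struct step steps[] = {
            { "Initialize memory pools", init_pools },
            { "Initialize bands",        init_bands },
        };

        for (size_t i = 0; i < sizeof(steps) / sizeof(steps[0]); i++) {
            struct timespec t0;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            int status = steps[i].fn();
            printf("[FTL][ftl0] Action\n");
            printf("[FTL][ftl0] name: %s\n", steps[i].name);
            printf("[FTL][ftl0] duration: %.3f ms\n", ms_since(&t0));
            printf("[FTL][ftl0] status: %d\n", status);
            if (status != 0)
                return status;  /* a failing step aborts the sequence */
        }
        return 0;
    }

Driving every step through one timed loop like this explains why each step in the log reports the same four fields and why a single summary line ("Management process finished, name 'FTL startup', duration = 380.608 ms, result 0") can close out the whole sequence.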
00:27:42.525 [2024-11-20 13:47:34.405624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:42.525 [2024-11-20 13:47:34.409983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.608 ms, result 0 00:27:42.525 [2024-11-20 13:47:34.410893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:42.525 [2024-11-20 13:47:34.427637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:43.461  [2024-11-20T13:47:36.436Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T13:47:37.812Z] Copying: 55/256 [MB] (27 MBps) [2024-11-20T13:47:38.749Z] Copying: 82/256 [MB] (27 MBps) [2024-11-20T13:47:39.715Z] Copying: 108/256 [MB] (25 MBps) [2024-11-20T13:47:40.648Z] Copying: 135/256 [MB] (27 MBps) [2024-11-20T13:47:41.582Z] Copying: 160/256 [MB] (25 MBps) [2024-11-20T13:47:42.516Z] Copying: 186/256 [MB] (25 MBps) [2024-11-20T13:47:43.450Z] Copying: 212/256 [MB] (26 MBps) [2024-11-20T13:47:44.015Z] Copying: 240/256 [MB] (27 MBps) [2024-11-20T13:47:44.015Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-20 13:47:44.004888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:52.235 [2024-11-20 13:47:44.017463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.235 [2024-11-20 13:47:44.017548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:52.235 [2024-11-20 13:47:44.017569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:52.235 [2024-11-20 13:47:44.017600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.235 [2024-11-20 13:47:44.017638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:52.235 [2024-11-20 13:47:44.021056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.235 [2024-11-20 13:47:44.021271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:52.235 [2024-11-20 13:47:44.021305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.393 ms 00:27:52.235 [2024-11-20 13:47:44.021318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.235 [2024-11-20 13:47:44.023119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.235 [2024-11-20 13:47:44.023164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:52.235 [2024-11-20 13:47:44.023182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:27:52.235 [2024-11-20 13:47:44.023194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.235 [2024-11-20 13:47:44.030310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.235 [2024-11-20 13:47:44.030413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:52.235 [2024-11-20 13:47:44.030431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.085 ms 00:27:52.235 [2024-11-20 13:47:44.030443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.235 [2024-11-20 13:47:44.038228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.235 [2024-11-20 13:47:44.038326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:52.235 
[2024-11-20 13:47:44.038347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.667 ms 00:27:52.235 [2024-11-20 13:47:44.038359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.235 [2024-11-20 13:47:44.071243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.235 [2024-11-20 13:47:44.071335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:52.235 [2024-11-20 13:47:44.071355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.777 ms 00:27:52.235 [2024-11-20 13:47:44.071367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.236 [2024-11-20 13:47:44.090786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.236 [2024-11-20 13:47:44.090895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:52.236 [2024-11-20 13:47:44.090923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.258 ms 00:27:52.236 [2024-11-20 13:47:44.090936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.236 [2024-11-20 13:47:44.091180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.236 [2024-11-20 13:47:44.091201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:52.236 [2024-11-20 13:47:44.091215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:27:52.236 [2024-11-20 13:47:44.091226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.236 [2024-11-20 13:47:44.123757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.236 [2024-11-20 13:47:44.123827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:52.236 [2024-11-20 13:47:44.123846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.503 ms 00:27:52.236 [2024-11-20 13:47:44.123858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.236 [2024-11-20 13:47:44.155602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.236 [2024-11-20 13:47:44.155699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:52.236 [2024-11-20 13:47:44.155721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.625 ms 00:27:52.236 [2024-11-20 13:47:44.155732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.236 [2024-11-20 13:47:44.187847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.236 [2024-11-20 13:47:44.187928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:52.236 [2024-11-20 13:47:44.187949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.985 ms 00:27:52.236 [2024-11-20 13:47:44.187961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.236 [2024-11-20 13:47:44.219198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.236 [2024-11-20 13:47:44.219271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:52.236 [2024-11-20 13:47:44.219290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.080 ms 00:27:52.236 [2024-11-20 13:47:44.219302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.236 [2024-11-20 13:47:44.219411] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:52.236 [2024-11-20 13:47:44.219439] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 
13:47:44.219731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.219994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:27:52.236 [2024-11-20 13:47:44.220051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:52.236 [2024-11-20 13:47:44.220227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:52.237 [2024-11-20 13:47:44.220657] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:52.237 [2024-11-20 13:47:44.220669] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:27:52.237 [2024-11-20 13:47:44.220681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:52.237 [2024-11-20 13:47:44.220692] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:52.237 [2024-11-20 13:47:44.220703] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:52.237 [2024-11-20 13:47:44.220714] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:52.237 [2024-11-20 13:47:44.220725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:52.237 [2024-11-20 13:47:44.220736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:52.237 [2024-11-20 13:47:44.220747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:52.237 [2024-11-20 13:47:44.220757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:52.237 [2024-11-20 13:47:44.220766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:52.237 [2024-11-20 13:47:44.220777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.237 [2024-11-20 13:47:44.220795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:52.237 [2024-11-20 13:47:44.220807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:27:52.237 [2024-11-20 13:47:44.220818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.237 [2024-11-20 13:47:44.237956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.237 [2024-11-20 13:47:44.238250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:52.237 [2024-11-20 13:47:44.238282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.102 ms 00:27:52.237 [2024-11-20 13:47:44.238295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.237 [2024-11-20 13:47:44.238825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.237 [2024-11-20 13:47:44.238851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:52.237 [2024-11-20 13:47:44.238865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:27:52.237 [2024-11-20 13:47:44.238903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.495 [2024-11-20 13:47:44.286417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.495 [2024-11-20 13:47:44.286494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:52.495 [2024-11-20 13:47:44.286515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.495 [2024-11-20 13:47:44.286527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.495 [2024-11-20 13:47:44.286700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.495 [2024-11-20 13:47:44.286732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:52.495 [2024-11-20 13:47:44.286745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.495 [2024-11-20 13:47:44.286757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:52.495 [2024-11-20 13:47:44.286829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.495 [2024-11-20 13:47:44.286848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:52.495 [2024-11-20 13:47:44.286860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.495 [2024-11-20 13:47:44.286897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.495 [2024-11-20 13:47:44.286927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.495 [2024-11-20 13:47:44.286948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:52.495 [2024-11-20 13:47:44.286960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.495 [2024-11-20 13:47:44.286971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.495 [2024-11-20 13:47:44.391456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.495 [2024-11-20 13:47:44.391528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:52.495 [2024-11-20 13:47:44.391547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 [2024-11-20 13:47:44.391558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.476692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.496 [2024-11-20 13:47:44.476781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:52.496 [2024-11-20 13:47:44.476802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 [2024-11-20 13:47:44.476814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.476960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.496 [2024-11-20 13:47:44.476980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:52.496 [2024-11-20 13:47:44.476993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 [2024-11-20 13:47:44.477004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.477041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.496 [2024-11-20 13:47:44.477055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:52.496 [2024-11-20 13:47:44.477080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 [2024-11-20 13:47:44.477091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.477230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.496 [2024-11-20 13:47:44.477251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:52.496 [2024-11-20 13:47:44.477263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 [2024-11-20 13:47:44.477274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.477330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.496 [2024-11-20 13:47:44.477348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:52.496 [2024-11-20 13:47:44.477360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 
[2024-11-20 13:47:44.477377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.477425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.496 [2024-11-20 13:47:44.477440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:52.496 [2024-11-20 13:47:44.477451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 [2024-11-20 13:47:44.477462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.477514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.496 [2024-11-20 13:47:44.477530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:52.496 [2024-11-20 13:47:44.477547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.496 [2024-11-20 13:47:44.477557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.496 [2024-11-20 13:47:44.477739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.285 ms, result 0 00:27:53.429 00:27:53.429 00:27:53.429 13:47:45 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78719 00:27:53.429 13:47:45 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:53.429 13:47:45 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78719 00:27:53.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.429 13:47:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78719 ']' 00:27:53.429 13:47:45 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.429 13:47:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.429 13:47:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.429 13:47:45 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.429 13:47:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:53.686 [2024-11-20 13:47:45.535990] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
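At this point the FTL shutdown has completed with result 0, and the trim test immediately launches a fresh spdk_tgt with -L ftl_init, then blocks in waitforlisten until the new process accepts RPC connections on /var/tmp/spdk.sock. A minimal sketch of such a readiness poll, assuming only the socket path (the real waitforlisten helper, visible above via common/autotest_common.sh, is also passed the target PID, 78719 here):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Poll a UNIX domain socket until something is accepting
     * connections on it, retrying every 100 ms up to `retries` times. */
    static int wait_for_rpc(const char *path, int retries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        while (retries-- > 0) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);      /* target is up and listening */
                return 0;
            }
            close(fd);
            usleep(100 * 1000);
        }
        return -1;              /* timed out */
    }

    int main(void)
    {
        if (wait_for_rpc("/var/tmp/spdk.sock", 100) != 0) {
            fprintf(stderr, "spdk_tgt did not start listening\n");
            return 1;
        }
        puts("ready");
        return 0;
    }

Only once this gate opens does the script issue rpc.py load_config (trim.sh line 75 below), which is why the "Waiting for process to start up and listen on UNIX domain socket ..." message always precedes the next batch of FTL startup notices.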
00:27:53.686 [2024-11-20 13:47:45.536146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78719 ] 00:27:53.686 [2024-11-20 13:47:45.711005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.944 [2024-11-20 13:47:45.874505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.878 13:47:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.878 13:47:46 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:54.878 13:47:46 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:55.136 [2024-11-20 13:47:46.989015] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:55.136 [2024-11-20 13:47:46.989093] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:55.396 [2024-11-20 13:47:47.175091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.175371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:55.396 [2024-11-20 13:47:47.175418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:55.396 [2024-11-20 13:47:47.175434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.180232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.180291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:55.396 [2024-11-20 13:47:47.180314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.755 ms 00:27:55.396 [2024-11-20 13:47:47.180327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.180642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:55.396 [2024-11-20 13:47:47.181640] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:55.396 [2024-11-20 13:47:47.181702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.181721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:55.396 [2024-11-20 13:47:47.181737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:27:55.396 [2024-11-20 13:47:47.181749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.183020] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:55.396 [2024-11-20 13:47:47.199988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.200070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:55.396 [2024-11-20 13:47:47.200093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.971 ms 00:27:55.396 [2024-11-20 13:47:47.200108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.200302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.200329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:55.396 [2024-11-20 13:47:47.200344] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:55.396 [2024-11-20 13:47:47.200357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.205373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.205469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:55.396 [2024-11-20 13:47:47.205490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.938 ms 00:27:55.396 [2024-11-20 13:47:47.205504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.205702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.205730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:55.396 [2024-11-20 13:47:47.205744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:55.396 [2024-11-20 13:47:47.205759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.205808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.205832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:55.396 [2024-11-20 13:47:47.205854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:55.396 [2024-11-20 13:47:47.205908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.205952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:55.396 [2024-11-20 13:47:47.210734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.210986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:55.396 [2024-11-20 13:47:47.211024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:27:55.396 [2024-11-20 13:47:47.211038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.211218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.211240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:55.396 [2024-11-20 13:47:47.211257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:55.396 [2024-11-20 13:47:47.211279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.211336] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:55.396 [2024-11-20 13:47:47.211383] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:55.396 [2024-11-20 13:47:47.211471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:55.396 [2024-11-20 13:47:47.211513] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:55.396 [2024-11-20 13:47:47.211672] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:55.396 [2024-11-20 13:47:47.211707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:55.396 [2024-11-20 13:47:47.211751] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:55.396 [2024-11-20 13:47:47.211777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:55.396 [2024-11-20 13:47:47.211806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:55.396 [2024-11-20 13:47:47.211828] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:55.396 [2024-11-20 13:47:47.211852] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:55.396 [2024-11-20 13:47:47.211906] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:55.396 [2024-11-20 13:47:47.211937] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:55.396 [2024-11-20 13:47:47.211963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.211990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:55.396 [2024-11-20 13:47:47.212015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:27:55.396 [2024-11-20 13:47:47.212045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.212198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.396 [2024-11-20 13:47:47.212237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:55.396 [2024-11-20 13:47:47.212259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:27:55.396 [2024-11-20 13:47:47.212282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.396 [2024-11-20 13:47:47.212515] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:55.396 [2024-11-20 13:47:47.212548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:55.396 [2024-11-20 13:47:47.212571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:55.396 [2024-11-20 13:47:47.212596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.396 [2024-11-20 13:47:47.212617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:55.396 [2024-11-20 13:47:47.212641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:55.396 [2024-11-20 13:47:47.212662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:55.396 [2024-11-20 13:47:47.212692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:55.396 [2024-11-20 13:47:47.212714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:55.396 [2024-11-20 13:47:47.212737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:55.396 [2024-11-20 13:47:47.212755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:55.396 [2024-11-20 13:47:47.212776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:55.396 [2024-11-20 13:47:47.212792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:55.396 [2024-11-20 13:47:47.212814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:55.396 [2024-11-20 13:47:47.212834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:55.396 [2024-11-20 13:47:47.212860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.396 
[2024-11-20 13:47:47.212907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:55.396 [2024-11-20 13:47:47.212933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:55.396 [2024-11-20 13:47:47.212954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.396 [2024-11-20 13:47:47.212979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:55.396 [2024-11-20 13:47:47.213019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:55.396 [2024-11-20 13:47:47.213047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.396 [2024-11-20 13:47:47.213069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:55.396 [2024-11-20 13:47:47.213096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:55.396 [2024-11-20 13:47:47.213119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.396 [2024-11-20 13:47:47.213142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:55.396 [2024-11-20 13:47:47.213163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:55.396 [2024-11-20 13:47:47.213187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.396 [2024-11-20 13:47:47.213206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:55.396 [2024-11-20 13:47:47.213226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:55.396 [2024-11-20 13:47:47.213242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.396 [2024-11-20 13:47:47.213264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:55.396 [2024-11-20 13:47:47.213282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:55.396 [2024-11-20 13:47:47.213310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:55.396 [2024-11-20 13:47:47.213330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:55.397 [2024-11-20 13:47:47.213351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:55.397 [2024-11-20 13:47:47.213368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:55.397 [2024-11-20 13:47:47.213387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:55.397 [2024-11-20 13:47:47.213406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:55.397 [2024-11-20 13:47:47.213433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.397 [2024-11-20 13:47:47.213453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:55.397 [2024-11-20 13:47:47.213474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:55.397 [2024-11-20 13:47:47.213496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.397 [2024-11-20 13:47:47.213518] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:55.397 [2024-11-20 13:47:47.213543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:55.397 [2024-11-20 13:47:47.213568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:55.397 [2024-11-20 13:47:47.213589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.397 [2024-11-20 13:47:47.213614] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:55.397 [2024-11-20 13:47:47.213637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:55.397 [2024-11-20 13:47:47.213663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:55.397 [2024-11-20 13:47:47.213684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:55.397 [2024-11-20 13:47:47.213709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:55.397 [2024-11-20 13:47:47.213732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:55.397 [2024-11-20 13:47:47.213762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:55.397 [2024-11-20 13:47:47.213789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.397 [2024-11-20 13:47:47.213822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:55.397 [2024-11-20 13:47:47.213847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:55.397 [2024-11-20 13:47:47.213899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:55.397 [2024-11-20 13:47:47.213928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:55.397 [2024-11-20 13:47:47.213955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:55.397 [2024-11-20 13:47:47.213989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:55.397 [2024-11-20 13:47:47.214015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:55.397 [2024-11-20 13:47:47.214038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:55.397 [2024-11-20 13:47:47.214064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:55.397 [2024-11-20 13:47:47.214087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:55.397 [2024-11-20 13:47:47.214113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:55.397 [2024-11-20 13:47:47.214132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:55.397 [2024-11-20 13:47:47.214155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:55.397 [2024-11-20 13:47:47.214176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:55.397 [2024-11-20 13:47:47.214210] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:55.397 [2024-11-20 
13:47:47.214232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.397 [2024-11-20 13:47:47.214264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:55.397 [2024-11-20 13:47:47.214285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:55.397 [2024-11-20 13:47:47.214308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:55.397 [2024-11-20 13:47:47.214329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:55.397 [2024-11-20 13:47:47.214357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.214380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:55.397 [2024-11-20 13:47:47.214405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.984 ms 00:27:55.397 [2024-11-20 13:47:47.214426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.249654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.249726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.397 [2024-11-20 13:47:47.249756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.087 ms 00:27:55.397 [2024-11-20 13:47:47.249777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.250030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.250057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:55.397 [2024-11-20 13:47:47.250080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:55.397 [2024-11-20 13:47:47.250093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.294820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.295076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.397 [2024-11-20 13:47:47.295215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.681 ms 00:27:55.397 [2024-11-20 13:47:47.295273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.295525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.295595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.397 [2024-11-20 13:47:47.295737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:55.397 [2024-11-20 13:47:47.295885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.296358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.296500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.397 [2024-11-20 13:47:47.296675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:27:55.397 [2024-11-20 13:47:47.296849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.297145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.297184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.397 [2024-11-20 13:47:47.297208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:27:55.397 [2024-11-20 13:47:47.297221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.318152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.318412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:55.397 [2024-11-20 13:47:47.318456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.885 ms 00:27:55.397 [2024-11-20 13:47:47.318472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.336061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:55.397 [2024-11-20 13:47:47.336137] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:55.397 [2024-11-20 13:47:47.336170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.336185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:55.397 [2024-11-20 13:47:47.336206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.490 ms 00:27:55.397 [2024-11-20 13:47:47.336220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.367004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.367093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:55.397 [2024-11-20 13:47:47.367122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.592 ms 00:27:55.397 [2024-11-20 13:47:47.367136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.383512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.383581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:55.397 [2024-11-20 13:47:47.383616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.178 ms 00:27:55.397 [2024-11-20 13:47:47.383630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.399446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.399512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:55.397 [2024-11-20 13:47:47.399539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.655 ms 00:27:55.397 [2024-11-20 13:47:47.399553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.397 [2024-11-20 13:47:47.400707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.397 [2024-11-20 13:47:47.400754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:55.397 [2024-11-20 13:47:47.400780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:27:55.397 [2024-11-20 13:47:47.400794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 
13:47:47.489564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.489665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:55.656 [2024-11-20 13:47:47.489697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.695 ms 00:27:55.656 [2024-11-20 13:47:47.489712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.503116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:55.656 [2024-11-20 13:47:47.517760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.517895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:55.656 [2024-11-20 13:47:47.517943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.822 ms 00:27:55.656 [2024-11-20 13:47:47.517975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.518161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.518189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:55.656 [2024-11-20 13:47:47.518205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:55.656 [2024-11-20 13:47:47.518222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.518293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.518323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:55.656 [2024-11-20 13:47:47.518338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:55.656 [2024-11-20 13:47:47.518361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.518397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.518419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:55.656 [2024-11-20 13:47:47.518433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:55.656 [2024-11-20 13:47:47.518453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.518504] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:55.656 [2024-11-20 13:47:47.518541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.518556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:55.656 [2024-11-20 13:47:47.518582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:55.656 [2024-11-20 13:47:47.518594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.551028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.551113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:55.656 [2024-11-20 13:47:47.551143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.363 ms 00:27:55.656 [2024-11-20 13:47:47.551158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.551393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.656 [2024-11-20 13:47:47.551416] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:55.656 [2024-11-20 13:47:47.551436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:55.656 [2024-11-20 13:47:47.551456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.656 [2024-11-20 13:47:47.552567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:55.656 [2024-11-20 13:47:47.557327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 377.152 ms, result 0 00:27:55.656 [2024-11-20 13:47:47.558406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:55.656 Some configs were skipped because the RPC state that can call them passed over. 00:27:55.656 13:47:47 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:56.223 [2024-11-20 13:47:47.957194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.223 [2024-11-20 13:47:47.957465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:56.223 [2024-11-20 13:47:47.957607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:27:56.223 [2024-11-20 13:47:47.957743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.223 [2024-11-20 13:47:47.957932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.132 ms, result 0 00:27:56.223 true 00:27:56.223 13:47:47 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:56.481 [2024-11-20 13:47:48.265170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-11-20 13:47:48.265411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:56.481 [2024-11-20 13:47:48.265553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:27:56.481 [2024-11-20 13:47:48.265698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-11-20 13:47:48.265902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.581 ms, result 0 00:27:56.481 true 00:27:56.481 13:47:48 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78719 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78719 ']' 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78719 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78719 00:27:56.481 killing process with pid 78719 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78719' 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78719 00:27:56.481 13:47:48 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78719 00:27:57.413 [2024-11-20 13:47:49.388503] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.388588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:57.413 [2024-11-20 13:47:49.388611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.413 [2024-11-20 13:47:49.388625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.388662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:57.413 [2024-11-20 13:47:49.392176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.392230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:57.413 [2024-11-20 13:47:49.392255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:27:57.413 [2024-11-20 13:47:49.392268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.392619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.392646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:57.413 [2024-11-20 13:47:49.392664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:27:57.413 [2024-11-20 13:47:49.392675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.396837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.396899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:57.413 [2024-11-20 13:47:49.396925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms 00:27:57.413 [2024-11-20 13:47:49.396938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.404568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.404637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:57.413 [2024-11-20 13:47:49.404661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.571 ms 00:27:57.413 [2024-11-20 13:47:49.404673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.417496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.417571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:57.413 [2024-11-20 13:47:49.417598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.720 ms 00:27:57.413 [2024-11-20 13:47:49.417627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.426086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.426155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:57.413 [2024-11-20 13:47:49.426178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.381 ms 00:27:57.413 [2024-11-20 13:47:49.426190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.426367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.426390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:57.413 [2024-11-20 13:47:49.426406] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:27:57.413 [2024-11-20 13:47:49.426417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.413 [2024-11-20 13:47:49.439458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.413 [2024-11-20 13:47:49.439729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:57.413 [2024-11-20 13:47:49.439785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:27:57.413 [2024-11-20 13:47:49.439809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.672 [2024-11-20 13:47:49.453110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.672 [2024-11-20 13:47:49.453204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:57.672 [2024-11-20 13:47:49.453239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.165 ms 00:27:57.672 [2024-11-20 13:47:49.453254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.672 [2024-11-20 13:47:49.466303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.672 [2024-11-20 13:47:49.466617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:57.672 [2024-11-20 13:47:49.466698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.927 ms 00:27:57.672 [2024-11-20 13:47:49.466721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.672 [2024-11-20 13:47:49.479420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.672 [2024-11-20 13:47:49.479492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:57.672 [2024-11-20 13:47:49.479522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.533 ms 00:27:57.672 [2024-11-20 13:47:49.479536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.672 [2024-11-20 13:47:49.479603] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:57.672 [2024-11-20 13:47:49.479631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:57.672 [2024-11-20 13:47:49.479653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:57.672 [2024-11-20 13:47:49.479667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 
13:47:49.479801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.479989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:57.673 [2024-11-20 13:47:49.480245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.480983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:57.673 [2024-11-20 13:47:49.481168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:57.674 [2024-11-20 13:47:49.481970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:57.674 [2024-11-20 13:47:49.482001] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:27:57.674 [2024-11-20 13:47:49.482032] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:57.674 [2024-11-20 13:47:49.482059] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:57.674 [2024-11-20 13:47:49.482071] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:57.674 [2024-11-20 13:47:49.482088] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:57.674 [2024-11-20 13:47:49.482100] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:57.674 [2024-11-20 13:47:49.482117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:57.674 [2024-11-20 13:47:49.482130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:57.674 [2024-11-20 13:47:49.482145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:57.674 [2024-11-20 13:47:49.482156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:57.674 [2024-11-20 13:47:49.482175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:57.674 [2024-11-20 13:47:49.482188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:57.674 [2024-11-20 13:47:49.482207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:27:57.674 [2024-11-20 13:47:49.482220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.674 [2024-11-20 13:47:49.499473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.674 [2024-11-20 13:47:49.499549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:57.674 [2024-11-20 13:47:49.499585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.153 ms 00:27:57.674 [2024-11-20 13:47:49.499599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.674 [2024-11-20 13:47:49.500247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.674 [2024-11-20 13:47:49.500310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:57.674 [2024-11-20 13:47:49.500349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:27:57.674 [2024-11-20 13:47:49.500371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.674 [2024-11-20 13:47:49.560208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.674 [2024-11-20 13:47:49.560282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.674 [2024-11-20 13:47:49.560310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.674 [2024-11-20 13:47:49.560323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.674 [2024-11-20 13:47:49.560481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.674 [2024-11-20 13:47:49.560502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:57.674 [2024-11-20 13:47:49.560521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.674 [2024-11-20 13:47:49.560540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.674 [2024-11-20 13:47:49.560625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.674 [2024-11-20 13:47:49.560646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.674 [2024-11-20 13:47:49.560670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.674 [2024-11-20 13:47:49.560683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.674 [2024-11-20 13:47:49.560719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.674 [2024-11-20 13:47:49.560735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.674 [2024-11-20 13:47:49.560753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.674 [2024-11-20 13:47:49.560765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.674 [2024-11-20 13:47:49.667199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.674 [2024-11-20 13:47:49.667493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.674 [2024-11-20 13:47:49.667563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.674 [2024-11-20 13:47:49.667590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 
13:47:49.755754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.933 [2024-11-20 13:47:49.755831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.933 [2024-11-20 13:47:49.755860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.933 [2024-11-20 13:47:49.755903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 13:47:49.756029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.933 [2024-11-20 13:47:49.756051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.933 [2024-11-20 13:47:49.756075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.933 [2024-11-20 13:47:49.756089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 13:47:49.756134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.933 [2024-11-20 13:47:49.756151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.933 [2024-11-20 13:47:49.756169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.933 [2024-11-20 13:47:49.756183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 13:47:49.756385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.933 [2024-11-20 13:47:49.756428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.933 [2024-11-20 13:47:49.756462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.933 [2024-11-20 13:47:49.756487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 13:47:49.756598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.933 [2024-11-20 13:47:49.756627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:57.933 [2024-11-20 13:47:49.756650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.933 [2024-11-20 13:47:49.756663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 13:47:49.756728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.933 [2024-11-20 13:47:49.756746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:57.933 [2024-11-20 13:47:49.756768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.933 [2024-11-20 13:47:49.756781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 13:47:49.756845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.933 [2024-11-20 13:47:49.756864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:57.933 [2024-11-20 13:47:49.756909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.933 [2024-11-20 13:47:49.756923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.933 [2024-11-20 13:47:49.757113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.575 ms, result 0 00:27:58.868 13:47:50 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:58.868 13:47:50 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:58.868 [2024-11-20 13:47:50.758808] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:27:58.868 [2024-11-20 13:47:50.759010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78783 ] 00:27:59.126 [2024-11-20 13:47:50.933640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.127 [2024-11-20 13:47:51.037918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.385 [2024-11-20 13:47:51.366999] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:59.385 [2024-11-20 13:47:51.367096] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:59.644 [2024-11-20 13:47:51.529093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.529173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:59.645 [2024-11-20 13:47:51.529195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:59.645 [2024-11-20 13:47:51.529208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.532711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.532953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:59.645 [2024-11-20 13:47:51.533002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.470 ms 00:27:59.645 [2024-11-20 13:47:51.533028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.533244] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:59.645 [2024-11-20 13:47:51.534282] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:59.645 [2024-11-20 13:47:51.534330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.534346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:59.645 [2024-11-20 13:47:51.534360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:27:59.645 [2024-11-20 13:47:51.534372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.535725] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:59.645 [2024-11-20 13:47:51.552438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.552761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:59.645 [2024-11-20 13:47:51.552817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.711 ms 00:27:59.645 [2024-11-20 13:47:51.552845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.553082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.553108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:59.645 [2024-11-20 13:47:51.553122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.038 ms 00:27:59.645 [2024-11-20 13:47:51.553135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.557978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.558049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:59.645 [2024-11-20 13:47:51.558068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.774 ms 00:27:59.645 [2024-11-20 13:47:51.558080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.558255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.558278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:59.645 [2024-11-20 13:47:51.558293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:59.645 [2024-11-20 13:47:51.558305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.558346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.558367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:59.645 [2024-11-20 13:47:51.558380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:59.645 [2024-11-20 13:47:51.558392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.558425] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:59.645 [2024-11-20 13:47:51.562857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.562919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:59.645 [2024-11-20 13:47:51.562937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms 00:27:59.645 [2024-11-20 13:47:51.562949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.563083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.563116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:59.645 [2024-11-20 13:47:51.563136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:59.645 [2024-11-20 13:47:51.563148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.563185] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:59.645 [2024-11-20 13:47:51.563245] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:59.645 [2024-11-20 13:47:51.563316] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:59.645 [2024-11-20 13:47:51.563356] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:59.645 [2024-11-20 13:47:51.563518] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:59.645 [2024-11-20 13:47:51.563554] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:59.645 [2024-11-20 13:47:51.563576] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:59.645 [2024-11-20 13:47:51.563592] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:59.645 [2024-11-20 13:47:51.563614] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:59.645 [2024-11-20 13:47:51.563627] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:59.645 [2024-11-20 13:47:51.563639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:59.645 [2024-11-20 13:47:51.563650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:59.645 [2024-11-20 13:47:51.563661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:59.645 [2024-11-20 13:47:51.563675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.563687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:59.645 [2024-11-20 13:47:51.563699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:27:59.645 [2024-11-20 13:47:51.563711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.563822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.645 [2024-11-20 13:47:51.563844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:59.645 [2024-11-20 13:47:51.563857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:59.645 [2024-11-20 13:47:51.563901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.645 [2024-11-20 13:47:51.564027] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:59.645 [2024-11-20 13:47:51.564055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:59.645 [2024-11-20 13:47:51.564074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:59.645 [2024-11-20 13:47:51.564086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:59.645 [2024-11-20 13:47:51.564125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:59.645 [2024-11-20 13:47:51.564179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:59.645 [2024-11-20 13:47:51.564201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:59.645 [2024-11-20 13:47:51.564242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:59.645 [2024-11-20 13:47:51.564254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:59.645 [2024-11-20 13:47:51.564267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:59.645 [2024-11-20 13:47:51.564310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:59.645 [2024-11-20 13:47:51.564335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:59.645 [2024-11-20 13:47:51.564358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564379] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:59.645 [2024-11-20 13:47:51.564401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:59.645 [2024-11-20 13:47:51.564418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:59.645 [2024-11-20 13:47:51.564461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:59.645 [2024-11-20 13:47:51.564519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:59.645 [2024-11-20 13:47:51.564542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:59.645 [2024-11-20 13:47:51.564575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:59.645 [2024-11-20 13:47:51.564586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:59.645 [2024-11-20 13:47:51.564620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:59.645 [2024-11-20 13:47:51.564643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:59.645 [2024-11-20 13:47:51.564693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:59.645 [2024-11-20 13:47:51.564715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:59.645 [2024-11-20 13:47:51.564735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:59.645 [2024-11-20 13:47:51.564758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:59.645 [2024-11-20 13:47:51.564780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:59.645 [2024-11-20 13:47:51.564811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:59.645 [2024-11-20 13:47:51.564835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:59.645 [2024-11-20 13:47:51.564859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:59.645 [2024-11-20 13:47:51.564915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:59.646 [2024-11-20 13:47:51.564939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:59.646 [2024-11-20 13:47:51.564960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:59.646 [2024-11-20 13:47:51.564982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:59.646 [2024-11-20 13:47:51.565003] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:59.646 [2024-11-20 13:47:51.565027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:59.646 [2024-11-20 13:47:51.565048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:59.646 [2024-11-20 13:47:51.565069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:59.646 [2024-11-20 13:47:51.565082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:59.646 
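
Note: the NV cache layout dumped above is dominated by the 90.00 MiB l2p region, and that figure follows directly from the logged L2P geometry (23592960 entries at 4 bytes each). A minimal back-of-envelope sketch, assuming SPDK's 4 KiB FTL block size; this is illustrative code written for this note, not part of the test:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t l2p_entries   = 23592960; /* "L2P entries" in this log */
        const uint64_t l2p_addr_size = 4;        /* "L2P address size: 4"     */
        const double   mib           = 1024.0 * 1024.0;

        /* One table slot per addressable 4 KiB block: entries x address size. */
        printf("L2P table: %.2f MiB\n",
               l2p_entries * l2p_addr_size / mib);      /* prints 90.00 MiB   */

        /* User-visible space those entries map, at 4096 bytes per block. */
        printf("mapped space: %.2f GiB\n",
               l2p_entries * 4096.0 / (mib * 1024.0));  /* prints 90.00 GiB   */
        return 0;
    }

The full table is larger than the "l2p maximum resident size is: 59 (of 60) MiB" reported by ftl_l2p_cache_init later in this log, which is consistent with the L2P being cached and paged rather than held fully resident.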
[2024-11-20 13:47:51.565099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:59.646 [2024-11-20 13:47:51.565122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:59.646 [2024-11-20 13:47:51.565143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:59.646 [2024-11-20 13:47:51.565163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:59.646 [2024-11-20 13:47:51.565185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:59.646 [2024-11-20 13:47:51.565207] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:59.646 [2024-11-20 13:47:51.565225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.646 [2024-11-20 13:47:51.565249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:59.646 [2024-11-20 13:47:51.565278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:59.646 [2024-11-20 13:47:51.565310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:59.646 [2024-11-20 13:47:51.565334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:59.646 [2024-11-20 13:47:51.565368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:59.646 [2024-11-20 13:47:51.565393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:59.646 [2024-11-20 13:47:51.565416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:59.646 [2024-11-20 13:47:51.565438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:59.646 [2024-11-20 13:47:51.565458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:59.646 [2024-11-20 13:47:51.565481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:59.646 [2024-11-20 13:47:51.565505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:59.646 [2024-11-20 13:47:51.565530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:59.646 [2024-11-20 13:47:51.565553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:59.646 [2024-11-20 13:47:51.565576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:59.646 [2024-11-20 13:47:51.565600] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:59.646 [2024-11-20 13:47:51.565623] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.646 [2024-11-20 13:47:51.565637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:59.646 [2024-11-20 13:47:51.565650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:59.646 [2024-11-20 13:47:51.565661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:59.646 [2024-11-20 13:47:51.565674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:59.646 [2024-11-20 13:47:51.565688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.565701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:59.646 [2024-11-20 13:47:51.565721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:27:59.646 [2024-11-20 13:47:51.565733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.646 [2024-11-20 13:47:51.599335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.599655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:59.646 [2024-11-20 13:47:51.599847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.517 ms 00:27:59.646 [2024-11-20 13:47:51.600067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.646 [2024-11-20 13:47:51.600470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.600641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:59.646 [2024-11-20 13:47:51.600824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:59.646 [2024-11-20 13:47:51.601005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.646 [2024-11-20 13:47:51.649236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.649526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:59.646 [2024-11-20 13:47:51.649738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.031 ms 00:27:59.646 [2024-11-20 13:47:51.649954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.646 [2024-11-20 13:47:51.650159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.650193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:59.646 [2024-11-20 13:47:51.650210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:59.646 [2024-11-20 13:47:51.650222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.646 [2024-11-20 13:47:51.650573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.650616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:59.646 [2024-11-20 13:47:51.650645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:27:59.646 [2024-11-20 13:47:51.650706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.646 [2024-11-20 
13:47:51.650973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.651004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:59.646 [2024-11-20 13:47:51.651024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:27:59.646 [2024-11-20 13:47:51.651045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.646 [2024-11-20 13:47:51.668330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.646 [2024-11-20 13:47:51.668402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.646 [2024-11-20 13:47:51.668423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.230 ms 00:27:59.646 [2024-11-20 13:47:51.668436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.685209] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:59.905 [2024-11-20 13:47:51.685278] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:59.905 [2024-11-20 13:47:51.685300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.685314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:59.905 [2024-11-20 13:47:51.685329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.675 ms 00:27:59.905 [2024-11-20 13:47:51.685341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.715985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.716094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:59.905 [2024-11-20 13:47:51.716115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.485 ms 00:27:59.905 [2024-11-20 13:47:51.716128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.732537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.732612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:59.905 [2024-11-20 13:47:51.732632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.220 ms 00:27:59.905 [2024-11-20 13:47:51.732645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.748660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.748728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:59.905 [2024-11-20 13:47:51.748749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.865 ms 00:27:59.905 [2024-11-20 13:47:51.748762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.749717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.749761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:59.905 [2024-11-20 13:47:51.749779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:27:59.905 [2024-11-20 13:47:51.749792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.824461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.824542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:59.905 [2024-11-20 13:47:51.824564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.627 ms 00:27:59.905 [2024-11-20 13:47:51.824577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.837773] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:59.905 [2024-11-20 13:47:51.852583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.852669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:59.905 [2024-11-20 13:47:51.852691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.817 ms 00:27:59.905 [2024-11-20 13:47:51.852715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.852897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.852921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:59.905 [2024-11-20 13:47:51.852936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:59.905 [2024-11-20 13:47:51.852948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.853021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.853038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:59.905 [2024-11-20 13:47:51.853051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:59.905 [2024-11-20 13:47:51.853063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.853109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.853125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:59.905 [2024-11-20 13:47:51.853137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:59.905 [2024-11-20 13:47:51.853150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.853222] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:59.905 [2024-11-20 13:47:51.853248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.853268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:59.905 [2024-11-20 13:47:51.853291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:59.905 [2024-11-20 13:47:51.853314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.885221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.885299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:59.905 [2024-11-20 13:47:51.885322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.846 ms 00:27:59.905 [2024-11-20 13:47:51.885335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.885537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.905 [2024-11-20 13:47:51.885559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:27:59.905 [2024-11-20 13:47:51.885573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:59.905 [2024-11-20 13:47:51.885586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.905 [2024-11-20 13:47:51.886689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:59.905 [2024-11-20 13:47:51.891193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.171 ms, result 0 00:27:59.905 [2024-11-20 13:47:51.892145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:59.905 [2024-11-20 13:47:51.909000] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:01.279  [2024-11-20T13:47:54.252Z] Copying: 28/256 [MB] (28 MBps) [2024-11-20T13:47:55.238Z] Copying: 55/256 [MB] (26 MBps) [2024-11-20T13:47:56.173Z] Copying: 81/256 [MB] (26 MBps) [2024-11-20T13:47:57.107Z] Copying: 107/256 [MB] (26 MBps) [2024-11-20T13:47:58.040Z] Copying: 131/256 [MB] (24 MBps) [2024-11-20T13:47:58.975Z] Copying: 155/256 [MB] (24 MBps) [2024-11-20T13:48:00.344Z] Copying: 178/256 [MB] (23 MBps) [2024-11-20T13:48:01.275Z] Copying: 202/256 [MB] (23 MBps) [2024-11-20T13:48:02.209Z] Copying: 222/256 [MB] (19 MBps) [2024-11-20T13:48:02.467Z] Copying: 243/256 [MB] (21 MBps) [2024-11-20T13:48:02.467Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-20 13:48:02.447579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:10.428 [2024-11-20 13:48:02.465547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.428 [2024-11-20 13:48:02.465647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:10.428 [2024-11-20 13:48:02.465677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:10.428 [2024-11-20 13:48:02.465734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.428 [2024-11-20 13:48:02.465799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:10.687 [2024-11-20 13:48:02.470519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.687 [2024-11-20 13:48:02.470583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:10.687 [2024-11-20 13:48:02.470608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.682 ms 00:28:10.687 [2024-11-20 13:48:02.470626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.687 [2024-11-20 13:48:02.471437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.687 [2024-11-20 13:48:02.471653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:10.687 [2024-11-20 13:48:02.471690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:28:10.687 [2024-11-20 13:48:02.471709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.687 [2024-11-20 13:48:02.480225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.687 [2024-11-20 13:48:02.480312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:10.687 [2024-11-20 13:48:02.480331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.462 ms 00:28:10.687 [2024-11-20 13:48:02.480344] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.687 [2024-11-20 13:48:02.487961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.687 [2024-11-20 13:48:02.488019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:10.687 [2024-11-20 13:48:02.488036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.499 ms 00:28:10.688 [2024-11-20 13:48:02.488049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.520336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.688 [2024-11-20 13:48:02.520418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:10.688 [2024-11-20 13:48:02.520440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.195 ms 00:28:10.688 [2024-11-20 13:48:02.520453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.539213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.688 [2024-11-20 13:48:02.539329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:10.688 [2024-11-20 13:48:02.539363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.645 ms 00:28:10.688 [2024-11-20 13:48:02.539376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.539605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.688 [2024-11-20 13:48:02.539627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:10.688 [2024-11-20 13:48:02.539642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:10.688 [2024-11-20 13:48:02.539654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.573676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.688 [2024-11-20 13:48:02.573774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:10.688 [2024-11-20 13:48:02.573798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.965 ms 00:28:10.688 [2024-11-20 13:48:02.573812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.608333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.688 [2024-11-20 13:48:02.608473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:10.688 [2024-11-20 13:48:02.608498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.380 ms 00:28:10.688 [2024-11-20 13:48:02.608510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.641718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.688 [2024-11-20 13:48:02.641806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:10.688 [2024-11-20 13:48:02.641828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.105 ms 00:28:10.688 [2024-11-20 13:48:02.641841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.674690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.688 [2024-11-20 13:48:02.675019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:10.688 [2024-11-20 13:48:02.675053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.665 ms 00:28:10.688 [2024-11-20 13:48:02.675067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.688 [2024-11-20 13:48:02.675163] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:10.688 [2024-11-20 13:48:02.675188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 [2024-11-20 13:48:02.675465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:10.688 
[2024-11-20 13:48:02.675477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 24-97: 0 / 261120 wr_cnt: 0 state: free (identical for each of the 74 bands) 00:28:10.689 [2024-11-20 13:48:02.676521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:10.689 [2024-11-20 13:48:02.676533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:10.689 [2024-11-20 13:48:02.676546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:10.689 [2024-11-20 13:48:02.676569] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:10.689 [2024-11-20 13:48:02.676581] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:28:10.689 [2024-11-20 13:48:02.676594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:10.689 [2024-11-20 13:48:02.676605] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:10.689 [2024-11-20 13:48:02.676616] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:10.689 [2024-11-20 13:48:02.676628] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:10.689 [2024-11-20 13:48:02.676640] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:10.689 [2024-11-20 13:48:02.676651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:10.689 [2024-11-20 13:48:02.676662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:10.689 [2024-11-20 13:48:02.676673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:10.689 [2024-11-20 13:48:02.676683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:10.689 [2024-11-20 13:48:02.676696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.689 [2024-11-20 13:48:02.676731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:10.689 [2024-11-20 13:48:02.676755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:28:10.689 [2024-11-20 13:48:02.676775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.689 [2024-11-20 13:48:02.693841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.689 [2024-11-20 13:48:02.693932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:10.689 [2024-11-20 13:48:02.693953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.029 ms 00:28:10.689 [2024-11-20 13:48:02.693966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.689 [2024-11-20 13:48:02.694486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.689 [2024-11-20 13:48:02.694521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:10.689 [2024-11-20 13:48:02.694538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:28:10.689 [2024-11-20 13:48:02.694550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 13:48:02.742355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.948 [2024-11-20 13:48:02.742435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.948 [2024-11-20 13:48:02.742457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.948 [2024-11-20 13:48:02.742469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 13:48:02.742626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.948 [2024-11-20 
13:48:02.742646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.948 [2024-11-20 13:48:02.742660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.948 [2024-11-20 13:48:02.742672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 13:48:02.742760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.948 [2024-11-20 13:48:02.742780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.948 [2024-11-20 13:48:02.742794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.948 [2024-11-20 13:48:02.742807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 13:48:02.742833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.948 [2024-11-20 13:48:02.742861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.948 [2024-11-20 13:48:02.742907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.948 [2024-11-20 13:48:02.742921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 13:48:02.848140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.948 [2024-11-20 13:48:02.848225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:10.948 [2024-11-20 13:48:02.848246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.948 [2024-11-20 13:48:02.848259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 13:48:02.935902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.948 [2024-11-20 13:48:02.935982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:10.948 [2024-11-20 13:48:02.936003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.948 [2024-11-20 13:48:02.936016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 13:48:02.936118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.949 [2024-11-20 13:48:02.936137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:10.949 [2024-11-20 13:48:02.936150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.949 [2024-11-20 13:48:02.936162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 13:48:02.936199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.949 [2024-11-20 13:48:02.936213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:10.949 [2024-11-20 13:48:02.936237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.949 [2024-11-20 13:48:02.936249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 13:48:02.936385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.949 [2024-11-20 13:48:02.936406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:10.949 [2024-11-20 13:48:02.936420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.949 [2024-11-20 13:48:02.936431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 13:48:02.936485] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.949 [2024-11-20 13:48:02.936504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:10.949 [2024-11-20 13:48:02.936517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.949 [2024-11-20 13:48:02.936545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 13:48:02.936596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.949 [2024-11-20 13:48:02.936612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:10.949 [2024-11-20 13:48:02.936625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.949 [2024-11-20 13:48:02.936637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 13:48:02.936694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.949 [2024-11-20 13:48:02.936712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:10.949 [2024-11-20 13:48:02.936731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.949 [2024-11-20 13:48:02.936743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 13:48:02.936947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 471.414 ms, result 0 00:28:11.912 00:28:11.912 00:28:11.912 13:48:03 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:28:11.912 13:48:03 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:12.520 13:48:04 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:12.779 [2024-11-20 13:48:04.570533] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
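
Note: the trim.sh@86-87 steps above verify the trim from the device side: cmp --bytes=4194304 succeeds only if the first 4 MiB of the dumped data file read back identical to /dev/zero (i.e. the trimmed LBAs return zeroes), and md5sum records a fingerprint of the dumped data, presumably for later comparison, before spdk_dd rewrites ftl0 from random_pattern. A self-contained C sketch of the same zero-check (the default "data" path here is hypothetical; pass the dumped file as argv[1]):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const long check_len = 4194304;              /* --bytes=4194304 */
        FILE *f = fopen(argc > 1 ? argv[1] : "data", "rb");
        if (!f) { perror("fopen"); return 2; }

        int c;
        for (long i = 0; i < check_len; i++) {
            if ((c = fgetc(f)) == EOF || c != 0) {
                fprintf(stderr, "byte %ld differs from /dev/zero\n", i);
                fclose(f);
                return 1;                 /* cmp would report a mismatch */
            }
        }
        fclose(f);
        return 0;                 /* trimmed range reads back as zeroes */
    }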
00:28:12.779 [2024-11-20 13:48:04.570758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78927 ] 00:28:12.779 [2024-11-20 13:48:04.746966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.037 [2024-11-20 13:48:04.861056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.295 [2024-11-20 13:48:05.271381] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.295 [2024-11-20 13:48:05.271477] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.555 [2024-11-20 13:48:05.433678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.555 [2024-11-20 13:48:05.433756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:13.555 [2024-11-20 13:48:05.433778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.555 [2024-11-20 13:48:05.433790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.555 [2024-11-20 13:48:05.437179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.555 [2024-11-20 13:48:05.437231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:13.555 [2024-11-20 13:48:05.437249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:28:13.555 [2024-11-20 13:48:05.437261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.555 [2024-11-20 13:48:05.437462] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:13.555 [2024-11-20 13:48:05.438428] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:13.555 [2024-11-20 13:48:05.438472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.555 [2024-11-20 13:48:05.438488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:13.555 [2024-11-20 13:48:05.438501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:28:13.555 [2024-11-20 13:48:05.438514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.555 [2024-11-20 13:48:05.439793] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:13.555 [2024-11-20 13:48:05.456465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.555 [2024-11-20 13:48:05.456554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:13.556 [2024-11-20 13:48:05.456576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.668 ms 00:28:13.556 [2024-11-20 13:48:05.456589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.456787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.556 [2024-11-20 13:48:05.456811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:13.556 [2024-11-20 13:48:05.456825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:13.556 [2024-11-20 13:48:05.456837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.461556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:13.556 [2024-11-20 13:48:05.461627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:13.556 [2024-11-20 13:48:05.461646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:28:13.556 [2024-11-20 13:48:05.461659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.461830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.556 [2024-11-20 13:48:05.461889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:13.556 [2024-11-20 13:48:05.461907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:13.556 [2024-11-20 13:48:05.461919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.461963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.556 [2024-11-20 13:48:05.461985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:13.556 [2024-11-20 13:48:05.461998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:13.556 [2024-11-20 13:48:05.462009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.462042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:13.556 [2024-11-20 13:48:05.466434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.556 [2024-11-20 13:48:05.466494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:13.556 [2024-11-20 13:48:05.466513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.399 ms 00:28:13.556 [2024-11-20 13:48:05.466525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.466630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.556 [2024-11-20 13:48:05.466649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:13.556 [2024-11-20 13:48:05.466662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:13.556 [2024-11-20 13:48:05.466674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.466765] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:13.556 [2024-11-20 13:48:05.466803] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:13.556 [2024-11-20 13:48:05.466849] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:13.556 [2024-11-20 13:48:05.466888] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:13.556 [2024-11-20 13:48:05.467015] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:13.556 [2024-11-20 13:48:05.467033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:13.556 [2024-11-20 13:48:05.467049] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:13.556 [2024-11-20 13:48:05.467070] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467100] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467122] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:13.556 [2024-11-20 13:48:05.467143] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:13.556 [2024-11-20 13:48:05.467159] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:13.556 [2024-11-20 13:48:05.467171] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:13.556 [2024-11-20 13:48:05.467185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.556 [2024-11-20 13:48:05.467199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:13.556 [2024-11-20 13:48:05.467221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:28:13.556 [2024-11-20 13:48:05.467239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.467361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.556 [2024-11-20 13:48:05.467398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:13.556 [2024-11-20 13:48:05.467416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:13.556 [2024-11-20 13:48:05.467431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.556 [2024-11-20 13:48:05.467583] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:13.556 [2024-11-20 13:48:05.467607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:13.556 [2024-11-20 13:48:05.467620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:13.556 [2024-11-20 13:48:05.467656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:13.556 [2024-11-20 13:48:05.467691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.556 [2024-11-20 13:48:05.467713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:13.556 [2024-11-20 13:48:05.467724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:13.556 [2024-11-20 13:48:05.467734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.556 [2024-11-20 13:48:05.467762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:13.556 [2024-11-20 13:48:05.467773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:13.556 [2024-11-20 13:48:05.467785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:13.556 [2024-11-20 13:48:05.467807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467818] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:13.556 [2024-11-20 13:48:05.467840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:13.556 [2024-11-20 13:48:05.467901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:13.556 [2024-11-20 13:48:05.467937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:13.556 [2024-11-20 13:48:05.467970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:13.556 [2024-11-20 13:48:05.467980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.556 [2024-11-20 13:48:05.467992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:13.556 [2024-11-20 13:48:05.468003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:13.556 [2024-11-20 13:48:05.468014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.556 [2024-11-20 13:48:05.468025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:13.556 [2024-11-20 13:48:05.468035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:13.556 [2024-11-20 13:48:05.468047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.556 [2024-11-20 13:48:05.468065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:13.556 [2024-11-20 13:48:05.468078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:13.556 [2024-11-20 13:48:05.468095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.556 [2024-11-20 13:48:05.468114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:13.556 [2024-11-20 13:48:05.468142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:13.556 [2024-11-20 13:48:05.468156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.556 [2024-11-20 13:48:05.468167] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:13.556 [2024-11-20 13:48:05.468179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:13.556 [2024-11-20 13:48:05.468191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.556 [2024-11-20 13:48:05.468215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.556 [2024-11-20 13:48:05.468229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:13.556 [2024-11-20 13:48:05.468245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:13.556 [2024-11-20 13:48:05.468266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:13.556 
[2024-11-20 13:48:05.468293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:13.556 [2024-11-20 13:48:05.468306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:13.556 [2024-11-20 13:48:05.468317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:13.556 [2024-11-20 13:48:05.468331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:13.556 [2024-11-20 13:48:05.468345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.557 [2024-11-20 13:48:05.468361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:13.557 [2024-11-20 13:48:05.468377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:13.557 [2024-11-20 13:48:05.468392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:13.557 [2024-11-20 13:48:05.468413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:13.557 [2024-11-20 13:48:05.468433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:13.557 [2024-11-20 13:48:05.468446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:13.557 [2024-11-20 13:48:05.468457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:13.557 [2024-11-20 13:48:05.468469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:13.557 [2024-11-20 13:48:05.468480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:13.557 [2024-11-20 13:48:05.468493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:13.557 [2024-11-20 13:48:05.468510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:13.557 [2024-11-20 13:48:05.468525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:13.557 [2024-11-20 13:48:05.468546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:13.557 [2024-11-20 13:48:05.468570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:13.557 [2024-11-20 13:48:05.468583] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:13.557 [2024-11-20 13:48:05.468597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.557 [2024-11-20 13:48:05.468611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:13.557 [2024-11-20 13:48:05.468622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:13.557 [2024-11-20 13:48:05.468635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:13.557 [2024-11-20 13:48:05.468654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:13.557 [2024-11-20 13:48:05.468676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.468705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:13.557 [2024-11-20 13:48:05.468727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:28:13.557 [2024-11-20 13:48:05.468739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.557 [2024-11-20 13:48:05.502172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.502489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:13.557 [2024-11-20 13:48:05.502640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.343 ms 00:28:13.557 [2024-11-20 13:48:05.502719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.557 [2024-11-20 13:48:05.503059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.503235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:13.557 [2024-11-20 13:48:05.503395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:13.557 [2024-11-20 13:48:05.503550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.557 [2024-11-20 13:48:05.558666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.558984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:13.557 [2024-11-20 13:48:05.559112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.929 ms 00:28:13.557 [2024-11-20 13:48:05.559230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.557 [2024-11-20 13:48:05.559447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.559584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:13.557 [2024-11-20 13:48:05.559751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.557 [2024-11-20 13:48:05.559882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.557 [2024-11-20 13:48:05.560299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.560436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:13.557 [2024-11-20 13:48:05.560594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:28:13.557 [2024-11-20 13:48:05.560756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.557 [2024-11-20 13:48:05.560995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.561132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:13.557 [2024-11-20 13:48:05.561278] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:28:13.557 [2024-11-20 13:48:05.561303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.557 [2024-11-20 13:48:05.578643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.557 [2024-11-20 13:48:05.578742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:13.557 [2024-11-20 13:48:05.578771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.296 ms 00:28:13.557 [2024-11-20 13:48:05.578785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.595727] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:13.817 [2024-11-20 13:48:05.596043] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:13.817 [2024-11-20 13:48:05.596074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.596089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:13.817 [2024-11-20 13:48:05.596106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.070 ms 00:28:13.817 [2024-11-20 13:48:05.596119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.627140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.627471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:13.817 [2024-11-20 13:48:05.627505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.822 ms 00:28:13.817 [2024-11-20 13:48:05.627518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.645024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.645115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:13.817 [2024-11-20 13:48:05.645137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.304 ms 00:28:13.817 [2024-11-20 13:48:05.645150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.661592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.661682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:13.817 [2024-11-20 13:48:05.661703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.260 ms 00:28:13.817 [2024-11-20 13:48:05.661716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.662667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.662861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:13.817 [2024-11-20 13:48:05.662906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:28:13.817 [2024-11-20 13:48:05.662920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.740178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.740271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:13.817 [2024-11-20 13:48:05.740317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.209 ms 00:28:13.817 [2024-11-20 13:48:05.740330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.753579] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:13.817 [2024-11-20 13:48:05.768132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.768218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:13.817 [2024-11-20 13:48:05.768240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.610 ms 00:28:13.817 [2024-11-20 13:48:05.768263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.768444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.768465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:13.817 [2024-11-20 13:48:05.768480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:13.817 [2024-11-20 13:48:05.768492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.768564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.768582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:13.817 [2024-11-20 13:48:05.768596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:13.817 [2024-11-20 13:48:05.768607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.768652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.768668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:13.817 [2024-11-20 13:48:05.768681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:13.817 [2024-11-20 13:48:05.768692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.768735] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:13.817 [2024-11-20 13:48:05.768753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.768765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:13.817 [2024-11-20 13:48:05.768777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:13.817 [2024-11-20 13:48:05.768788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.801078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.801165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:13.817 [2024-11-20 13:48:05.801188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.251 ms 00:28:13.817 [2024-11-20 13:48:05.801201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.817 [2024-11-20 13:48:05.801436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.817 [2024-11-20 13:48:05.801459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:13.817 [2024-11-20 13:48:05.801473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:13.817 [2024-11-20 13:48:05.801485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:13.817 [2024-11-20 13:48:05.802578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:13.817 [2024-11-20 13:48:05.807116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.565 ms, result 0
00:28:13.817 [2024-11-20 13:48:05.808013] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:13.817 [2024-11-20 13:48:05.825438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:14.076  [2024-11-20T13:48:06.115Z] Copying: 4096/4096 [kB] (average 28 MBps)
[2024-11-20 13:48:05.973173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:14.076 [2024-11-20 13:48:05.986092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:14.076 [2024-11-20 13:48:05.986186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:14.076 [2024-11-20 13:48:05.986227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:14.076 [2024-11-20 13:48:05.986269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:14.076 [2024-11-20 13:48:05.986324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:28:14.076 [2024-11-20 13:48:05.989878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:14.076 [2024-11-20 13:48:05.989938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:14.076 [2024-11-20 13:48:05.989976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.508 ms
00:28:14.076 [2024-11-20 13:48:05.989999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:14.076 [2024-11-20 13:48:05.991583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:14.076 [2024-11-20 13:48:05.991760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:14.076 [2024-11-20 13:48:05.991801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms
00:28:14.076 [2024-11-20 13:48:05.991826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:14.076 [2024-11-20 13:48:05.996023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:14.076 [2024-11-20 13:48:05.996203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:14.076 [2024-11-20 13:48:05.996383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms
00:28:14.076 [2024-11-20 13:48:05.996542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:14.076 [2024-11-20 13:48:06.004567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:14.076 [2024-11-20 13:48:06.004912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:14.076 [2024-11-20 13:48:06.005064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.776 ms
00:28:14.076 [2024-11-20 13:48:06.005215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:14.076 [2024-11-20 13:48:06.037781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:14.076 [2024-11-20 13:48:06.038127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:14.076 [2024-11-20 13:48:06.038283] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 32.266 ms 00:28:14.076 [2024-11-20 13:48:06.038426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.076 [2024-11-20 13:48:06.056982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.076 [2024-11-20 13:48:06.057293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:14.076 [2024-11-20 13:48:06.057472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.256 ms 00:28:14.076 [2024-11-20 13:48:06.057629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.076 [2024-11-20 13:48:06.057972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.076 [2024-11-20 13:48:06.058119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:14.076 [2024-11-20 13:48:06.058273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:28:14.076 [2024-11-20 13:48:06.058434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.076 [2024-11-20 13:48:06.092036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.076 [2024-11-20 13:48:06.092317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:14.076 [2024-11-20 13:48:06.092473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.395 ms 00:28:14.076 [2024-11-20 13:48:06.092614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.335 [2024-11-20 13:48:06.125818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.335 [2024-11-20 13:48:06.126141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:14.335 [2024-11-20 13:48:06.126184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.910 ms 00:28:14.335 [2024-11-20 13:48:06.126207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.335 [2024-11-20 13:48:06.159074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.335 [2024-11-20 13:48:06.159156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:14.335 [2024-11-20 13:48:06.159189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.687 ms 00:28:14.335 [2024-11-20 13:48:06.159207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.335 [2024-11-20 13:48:06.192583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.335 [2024-11-20 13:48:06.192685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:14.336 [2024-11-20 13:48:06.192724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.157 ms 00:28:14.336 [2024-11-20 13:48:06.192747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.336 [2024-11-20 13:48:06.192927] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:14.336 [2024-11-20 13:48:06.192970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.192996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:28:14.336 [2024-11-20 13:48:06.193062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.193985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194627] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:14.336 [2024-11-20 13:48:06.194788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.194994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.195039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.195060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.195081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.195103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.195123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:14.337 [2024-11-20 13:48:06.195156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:14.337 [2024-11-20 13:48:06.195178] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:28:14.337 [2024-11-20 13:48:06.195198] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:14.337 [2024-11-20 13:48:06.195218] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:28:14.337 [2024-11-20 13:48:06.195235] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:14.337 [2024-11-20 13:48:06.195255] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:14.337 [2024-11-20 13:48:06.195273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:14.337 [2024-11-20 13:48:06.195292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:14.337 [2024-11-20 13:48:06.195314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:14.337 [2024-11-20 13:48:06.195332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:14.337 [2024-11-20 13:48:06.195349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:14.337 [2024-11-20 13:48:06.195370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.337 [2024-11-20 13:48:06.195401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:14.337 [2024-11-20 13:48:06.195424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:28:14.337 [2024-11-20 13:48:06.195445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.337 [2024-11-20 13:48:06.213966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.337 [2024-11-20 13:48:06.214049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:14.337 [2024-11-20 13:48:06.214090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.460 ms 00:28:14.337 [2024-11-20 13:48:06.214110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.337 [2024-11-20 13:48:06.214829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.337 [2024-11-20 13:48:06.214903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:14.337 [2024-11-20 13:48:06.214936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:28:14.337 [2024-11-20 13:48:06.214956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.337 [2024-11-20 13:48:06.262303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.337 [2024-11-20 13:48:06.262381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.337 [2024-11-20 13:48:06.262411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.337 [2024-11-20 13:48:06.262430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.337 [2024-11-20 13:48:06.262597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.337 [2024-11-20 13:48:06.262625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.337 [2024-11-20 13:48:06.262647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.337 [2024-11-20 13:48:06.262667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.337 [2024-11-20 13:48:06.262785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.337 [2024-11-20 13:48:06.262815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.337 [2024-11-20 13:48:06.262838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.337 [2024-11-20 13:48:06.262859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.337 [2024-11-20 13:48:06.262930] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.337 [2024-11-20 13:48:06.262965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.337 [2024-11-20 13:48:06.262987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.337 [2024-11-20 13:48:06.263007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.337 [2024-11-20 13:48:06.370151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.337 [2024-11-20 13:48:06.370251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.337 [2024-11-20 13:48:06.370283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.337 [2024-11-20 13:48:06.370303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.460922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.596 [2024-11-20 13:48:06.461229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.596 [2024-11-20 13:48:06.461271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.596 [2024-11-20 13:48:06.461291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.461415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.596 [2024-11-20 13:48:06.461443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.596 [2024-11-20 13:48:06.461467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.596 [2024-11-20 13:48:06.461486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.461539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.596 [2024-11-20 13:48:06.461564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.596 [2024-11-20 13:48:06.461600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.596 [2024-11-20 13:48:06.461621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.461817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.596 [2024-11-20 13:48:06.461848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.596 [2024-11-20 13:48:06.461896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.596 [2024-11-20 13:48:06.461922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.462007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.596 [2024-11-20 13:48:06.462036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:14.596 [2024-11-20 13:48:06.462067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.596 [2024-11-20 13:48:06.462088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.462160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.596 [2024-11-20 13:48:06.462195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:14.596 [2024-11-20 13:48:06.462220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.596 [2024-11-20 13:48:06.462239] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.462322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.596 [2024-11-20 13:48:06.462349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.596 [2024-11-20 13:48:06.462380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.596 [2024-11-20 13:48:06.462400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.596 [2024-11-20 13:48:06.462664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 476.565 ms, result 0 00:28:15.531 00:28:15.531 00:28:15.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.531 13:48:07 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78965 00:28:15.531 13:48:07 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:28:15.531 13:48:07 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78965 00:28:15.531 13:48:07 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78965 ']' 00:28:15.531 13:48:07 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.531 13:48:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.531 13:48:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.531 13:48:07 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.531 13:48:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:15.789 [2024-11-20 13:48:07.578282] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:28:15.789 [2024-11-20 13:48:07.578643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78965 ] 00:28:15.789 [2024-11-20 13:48:07.792491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.047 [2024-11-20 13:48:07.924058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.009 13:48:08 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.009 13:48:08 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:28:17.009 13:48:08 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:17.269 [2024-11-20 13:48:09.077202] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:17.269 [2024-11-20 13:48:09.077475] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:17.269 [2024-11-20 13:48:09.234950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.235212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:17.269 [2024-11-20 13:48:09.235352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:17.269 [2024-11-20 13:48:09.235377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.239257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.239310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:17.269 [2024-11-20 13:48:09.239333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.840 ms 00:28:17.269 [2024-11-20 13:48:09.239346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.239576] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:17.269 [2024-11-20 13:48:09.240534] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:17.269 [2024-11-20 13:48:09.240579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.240595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:17.269 [2024-11-20 13:48:09.240611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:28:17.269 [2024-11-20 13:48:09.240622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.241921] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:17.269 [2024-11-20 13:48:09.259206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.259471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:17.269 [2024-11-20 13:48:09.259503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.289 ms 00:28:17.269 [2024-11-20 13:48:09.259525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.259722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.259755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:17.269 [2024-11-20 13:48:09.259771] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:17.269 [2024-11-20 13:48:09.259789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.264545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.264816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:17.269 [2024-11-20 13:48:09.264979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.519 ms 00:28:17.269 [2024-11-20 13:48:09.265018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.265237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.265277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:17.269 [2024-11-20 13:48:09.265296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:28:17.269 [2024-11-20 13:48:09.265324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.265378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.265401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:17.269 [2024-11-20 13:48:09.265415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:17.269 [2024-11-20 13:48:09.265433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.265471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:17.269 [2024-11-20 13:48:09.269768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.269808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:17.269 [2024-11-20 13:48:09.269831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.302 ms 00:28:17.269 [2024-11-20 13:48:09.269845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.269969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.269990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:17.269 [2024-11-20 13:48:09.270018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:17.269 [2024-11-20 13:48:09.270030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.270069] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:17.269 [2024-11-20 13:48:09.270100] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:17.269 [2024-11-20 13:48:09.270163] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:17.269 [2024-11-20 13:48:09.270188] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:17.269 [2024-11-20 13:48:09.270310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:17.269 [2024-11-20 13:48:09.270387] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:17.269 [2024-11-20 13:48:09.270417] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:17.269 [2024-11-20 13:48:09.270435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:17.269 [2024-11-20 13:48:09.270456] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:17.269 [2024-11-20 13:48:09.270470] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:17.269 [2024-11-20 13:48:09.270487] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:17.269 [2024-11-20 13:48:09.270500] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:17.269 [2024-11-20 13:48:09.270522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:17.269 [2024-11-20 13:48:09.270537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.270555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:17.269 [2024-11-20 13:48:09.270569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:28:17.269 [2024-11-20 13:48:09.270594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.270708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.269 [2024-11-20 13:48:09.270732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:17.269 [2024-11-20 13:48:09.270746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:28:17.269 [2024-11-20 13:48:09.270763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.269 [2024-11-20 13:48:09.270894] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:17.269 [2024-11-20 13:48:09.270920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:17.269 [2024-11-20 13:48:09.270934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:17.270 [2024-11-20 13:48:09.270953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.270 [2024-11-20 13:48:09.270973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:17.270 [2024-11-20 13:48:09.270990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:17.270 [2024-11-20 13:48:09.271043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:17.270 [2024-11-20 13:48:09.271072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:17.270 [2024-11-20 13:48:09.271089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:17.270 [2024-11-20 13:48:09.271100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:17.270 [2024-11-20 13:48:09.271117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:17.270 [2024-11-20 13:48:09.271129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:17.270 [2024-11-20 13:48:09.271145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.270 
[2024-11-20 13:48:09.271157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:17.270 [2024-11-20 13:48:09.271173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:17.270 [2024-11-20 13:48:09.271230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:17.270 [2024-11-20 13:48:09.271281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:17.270 [2024-11-20 13:48:09.271322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:17.270 [2024-11-20 13:48:09.271359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:17.270 [2024-11-20 13:48:09.271396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:17.270 [2024-11-20 13:48:09.271422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:17.270 [2024-11-20 13:48:09.271436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:17.270 [2024-11-20 13:48:09.271447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:17.270 [2024-11-20 13:48:09.271460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:17.270 [2024-11-20 13:48:09.271471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:17.270 [2024-11-20 13:48:09.271486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:17.270 [2024-11-20 13:48:09.271510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:17.270 [2024-11-20 13:48:09.271521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271534] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:17.270 [2024-11-20 13:48:09.271546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:17.270 [2024-11-20 13:48:09.271559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.270 [2024-11-20 13:48:09.271586] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:28:17.270 [2024-11-20 13:48:09.271597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:17.270 [2024-11-20 13:48:09.271610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:17.270 [2024-11-20 13:48:09.271621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:17.270 [2024-11-20 13:48:09.271634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:17.270 [2024-11-20 13:48:09.271645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:17.270 [2024-11-20 13:48:09.271660] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:17.270 [2024-11-20 13:48:09.271675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.270 [2024-11-20 13:48:09.271693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:17.270 [2024-11-20 13:48:09.271705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:17.270 [2024-11-20 13:48:09.271721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:17.270 [2024-11-20 13:48:09.271734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:17.270 [2024-11-20 13:48:09.271747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:17.270 [2024-11-20 13:48:09.271759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:17.270 [2024-11-20 13:48:09.271773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:17.270 [2024-11-20 13:48:09.271785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:17.270 [2024-11-20 13:48:09.271799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:17.270 [2024-11-20 13:48:09.271811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:17.270 [2024-11-20 13:48:09.271826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:17.270 [2024-11-20 13:48:09.271838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:17.270 [2024-11-20 13:48:09.271852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:17.270 [2024-11-20 13:48:09.271864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:17.270 [2024-11-20 13:48:09.272284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:17.270 [2024-11-20 
13:48:09.272356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.270 [2024-11-20 13:48:09.272556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:17.270 [2024-11-20 13:48:09.272692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:17.270 [2024-11-20 13:48:09.272765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:17.270 [2024-11-20 13:48:09.272887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:17.270 [2024-11-20 13:48:09.273108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.270 [2024-11-20 13:48:09.273212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:17.270 [2024-11-20 13:48:09.273319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.292 ms 00:28:17.270 [2024-11-20 13:48:09.273370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.310895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.311135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:17.529 [2024-11-20 13:48:09.311270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.275 ms 00:28:17.529 [2024-11-20 13:48:09.311408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.311628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.311659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:17.529 [2024-11-20 13:48:09.311683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:17.529 [2024-11-20 13:48:09.311697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.357431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.357498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:17.529 [2024-11-20 13:48:09.357526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.690 ms 00:28:17.529 [2024-11-20 13:48:09.357541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.357708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.357728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:17.529 [2024-11-20 13:48:09.357748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:17.529 [2024-11-20 13:48:09.357761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.358315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.358460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:17.529 [2024-11-20 13:48:09.358604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:28:17.529 [2024-11-20 13:48:09.358665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.358953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.358985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:17.529 [2024-11-20 13:48:09.359007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:28:17.529 [2024-11-20 13:48:09.359021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.378652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.378718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:17.529 [2024-11-20 13:48:09.378748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.586 ms 00:28:17.529 [2024-11-20 13:48:09.378763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.395797] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:17.529 [2024-11-20 13:48:09.395857] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:17.529 [2024-11-20 13:48:09.396110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.396130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:17.529 [2024-11-20 13:48:09.396151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.126 ms 00:28:17.529 [2024-11-20 13:48:09.396165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.426435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.426528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:17.529 [2024-11-20 13:48:09.426558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.072 ms 00:28:17.529 [2024-11-20 13:48:09.426573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.443854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.444173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:17.529 [2024-11-20 13:48:09.444222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.046 ms 00:28:17.529 [2024-11-20 13:48:09.444238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.460153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.460332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:17.529 [2024-11-20 13:48:09.460490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.751 ms 00:28:17.529 [2024-11-20 13:48:09.460547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.461547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.461699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:17.529 [2024-11-20 13:48:09.461831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:28:17.529 [2024-11-20 13:48:09.461982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 
13:48:09.547229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.529 [2024-11-20 13:48:09.547305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:17.529 [2024-11-20 13:48:09.547330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.160 ms 00:28:17.529 [2024-11-20 13:48:09.547344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.529 [2024-11-20 13:48:09.560241] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:17.788 [2024-11-20 13:48:09.574123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.788 [2024-11-20 13:48:09.574214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:17.788 [2024-11-20 13:48:09.574235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.606 ms 00:28:17.788 [2024-11-20 13:48:09.574251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.788 [2024-11-20 13:48:09.574426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.788 [2024-11-20 13:48:09.574449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:17.788 [2024-11-20 13:48:09.574463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:17.788 [2024-11-20 13:48:09.574478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.788 [2024-11-20 13:48:09.574544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.788 [2024-11-20 13:48:09.574563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:17.788 [2024-11-20 13:48:09.574579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:17.788 [2024-11-20 13:48:09.574593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.788 [2024-11-20 13:48:09.574625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.788 [2024-11-20 13:48:09.574642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:17.788 [2024-11-20 13:48:09.574655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:17.788 [2024-11-20 13:48:09.574672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.788 [2024-11-20 13:48:09.574733] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:17.788 [2024-11-20 13:48:09.574756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.788 [2024-11-20 13:48:09.574772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:17.788 [2024-11-20 13:48:09.574786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:17.788 [2024-11-20 13:48:09.574801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.788 [2024-11-20 13:48:09.607619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.788 [2024-11-20 13:48:09.607700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:17.788 [2024-11-20 13:48:09.607725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.766 ms 00:28:17.788 [2024-11-20 13:48:09.607738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.788 [2024-11-20 13:48:09.607970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.788 [2024-11-20 13:48:09.607992] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:17.788 [2024-11-20 13:48:09.608012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:17.788 [2024-11-20 13:48:09.608024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.788 [2024-11-20 13:48:09.609085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:17.788 [2024-11-20 13:48:09.613697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.782 ms, result 0 00:28:17.788 [2024-11-20 13:48:09.615070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:17.788 Some configs were skipped because the RPC state that can call them passed over. 00:28:17.788 13:48:09 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:28:18.047 [2024-11-20 13:48:09.909315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.047 [2024-11-20 13:48:09.909390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:18.047 [2024-11-20 13:48:09.909413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:28:18.047 [2024-11-20 13:48:09.909428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.047 [2024-11-20 13:48:09.909482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.507 ms, result 0 00:28:18.047 true 00:28:18.047 13:48:09 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:28:18.306 [2024-11-20 13:48:10.209336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.306 [2024-11-20 13:48:10.209398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:18.306 [2024-11-20 13:48:10.209423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:28:18.306 [2024-11-20 13:48:10.209436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.306 [2024-11-20 13:48:10.209490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.102 ms, result 0 00:28:18.306 true 00:28:18.306 13:48:10 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78965 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78965 ']' 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78965 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78965 00:28:18.306 killing process with pid 78965 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78965' 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78965 00:28:18.306 13:48:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78965
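The killprocess call above is expanded line by line by bash xtrace (the common/autotest_common.sh@954-@978 markers). Reconstructed from that trace alone, the helper is approximately the following sketch; the authoritative version lives in common/autotest_common.sh, and branches the trace does not exercise here (non-Linux hosts, sudo-wrapped targets) are reduced to comments:

    # Approximate reconstruction of killprocess() from the xtrace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                      # @954: a pid is required
        kill -0 "$pid" || return 1                     # @958: bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then                # @959: Linux path taken in this run
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: resolves to "reactor_0" here
        fi
        # @964: "reactor_0 = sudo" is false, so the plain-kill path runs;
        # the sudo-wrapped branch is not visible in this trace and is omitted.
        echo "killing process with pid $pid"           # @972
        kill "$pid"                                    # @973
        wait "$pid"                                    # @978: reap the child (pid 78965 here)
    }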
00:28:19.242 [2024-11-20 13:48:11.205346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.205424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:19.242 [2024-11-20 13:48:11.205447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:19.242 [2024-11-20 13:48:11.205464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.205498] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:19.242 [2024-11-20 13:48:11.208823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.209022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:19.242 [2024-11-20 13:48:11.209060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:28:19.242 [2024-11-20 13:48:11.209074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.209405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.209434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:19.242 [2024-11-20 13:48:11.209451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:28:19.242 [2024-11-20 13:48:11.209463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.213693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.213739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:19.242 [2024-11-20 13:48:11.213759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:28:19.242 [2024-11-20 13:48:11.213772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.221353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.221391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:19.242 [2024-11-20 13:48:11.221410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.530 ms 00:28:19.242 [2024-11-20 13:48:11.221422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.234029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.234084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:19.242 [2024-11-20 13:48:11.234110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.529 ms 00:28:19.242 [2024-11-20 13:48:11.234136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.242533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.242589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:19.242 [2024-11-20 13:48:11.242609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.337 ms 00:28:19.242 [2024-11-20 13:48:11.242623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.242800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.242820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:19.242 [2024-11-20 13:48:11.242837] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:28:19.242 [2024-11-20 13:48:11.242849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.255837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.255904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:19.242 [2024-11-20 13:48:11.255935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.936 ms 00:28:19.242 [2024-11-20 13:48:11.255948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.242 [2024-11-20 13:48:11.268513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.242 [2024-11-20 13:48:11.268565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:19.242 [2024-11-20 13:48:11.268590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.508 ms 00:28:19.242 [2024-11-20 13:48:11.268602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.501 [2024-11-20 13:48:11.281010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.501 [2024-11-20 13:48:11.281070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:19.501 [2024-11-20 13:48:11.281104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.348 ms 00:28:19.501 [2024-11-20 13:48:11.281116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.501 [2024-11-20 13:48:11.293367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.501 [2024-11-20 13:48:11.293418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:19.501 [2024-11-20 13:48:11.293440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.159 ms 00:28:19.501 [2024-11-20 13:48:11.293452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.501 [2024-11-20 13:48:11.293502] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:19.501 [2024-11-20 13:48:11.293526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 
13:48:11.293674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.501 [2024-11-20 13:48:11.293689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.293991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:28:19.502 [2024-11-20 13:48:11.294048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.294986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:19.502 [2024-11-20 13:48:11.295008] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:19.503 [2024-11-20 13:48:11.295024] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:28:19.503 [2024-11-20 13:48:11.295054] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:19.503 [2024-11-20 13:48:11.295069] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:19.503 [2024-11-20 13:48:11.295080] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:19.503 [2024-11-20 13:48:11.295105] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:19.503 [2024-11-20 13:48:11.295116] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:19.503 [2024-11-20 13:48:11.295130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:19.503 [2024-11-20 13:48:11.295142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:19.503 [2024-11-20 13:48:11.295154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:19.503 [2024-11-20 13:48:11.295166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:19.503 [2024-11-20 13:48:11.295180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:19.503 [2024-11-20 13:48:11.295192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:19.503 [2024-11-20 13:48:11.295207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:28:19.503 [2024-11-20 13:48:11.295221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.503 [2024-11-20 13:48:11.311932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.503 [2024-11-20 13:48:11.311982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:19.503 [2024-11-20 13:48:11.312007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.658 ms 00:28:19.503 [2024-11-20 13:48:11.312024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.503 [2024-11-20 13:48:11.312534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.503 [2024-11-20 13:48:11.312573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:19.503 [2024-11-20 13:48:11.312596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:28:19.503 [2024-11-20 13:48:11.312608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.503 [2024-11-20 13:48:11.371046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.503 [2024-11-20 13:48:11.371109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.503 [2024-11-20 13:48:11.371132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.503 [2024-11-20 13:48:11.371145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.503 [2024-11-20 13:48:11.371285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.503 [2024-11-20 13:48:11.371303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.503 [2024-11-20 13:48:11.371333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.503 [2024-11-20 13:48:11.371345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.503 [2024-11-20 13:48:11.371426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.503 [2024-11-20 13:48:11.371445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.503 [2024-11-20 13:48:11.371463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.503 [2024-11-20 13:48:11.371475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.503 [2024-11-20 13:48:11.371503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.503 [2024-11-20 13:48:11.371517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.503 [2024-11-20 13:48:11.371530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.503 [2024-11-20 13:48:11.371545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.503 [2024-11-20 13:48:11.475005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.503 [2024-11-20 13:48:11.475068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.503 [2024-11-20 13:48:11.475091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.503 [2024-11-20 13:48:11.475104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 
13:48:11.560738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.761 [2024-11-20 13:48:11.560810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.761 [2024-11-20 13:48:11.560836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.761 [2024-11-20 13:48:11.560849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 13:48:11.561013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.761 [2024-11-20 13:48:11.561034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.761 [2024-11-20 13:48:11.561053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.761 [2024-11-20 13:48:11.561066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 13:48:11.561105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.761 [2024-11-20 13:48:11.561119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.761 [2024-11-20 13:48:11.561133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.761 [2024-11-20 13:48:11.561145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 13:48:11.561283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.761 [2024-11-20 13:48:11.561302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.761 [2024-11-20 13:48:11.561317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.761 [2024-11-20 13:48:11.561329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 13:48:11.561385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.761 [2024-11-20 13:48:11.561416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:19.761 [2024-11-20 13:48:11.561432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.761 [2024-11-20 13:48:11.561444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 13:48:11.561497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.761 [2024-11-20 13:48:11.561512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.761 [2024-11-20 13:48:11.561530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.761 [2024-11-20 13:48:11.561542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 13:48:11.561601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.761 [2024-11-20 13:48:11.561618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.761 [2024-11-20 13:48:11.561633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.761 [2024-11-20 13:48:11.561645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.761 [2024-11-20 13:48:11.561809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.443 ms, result 0
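With 'FTL shutdown' finished (result 0), trim.sh@105 re-opens the same FTL instance inside spdk_dd and copies 65536 blocks out to a plain file so the trimmed LBA ranges can be inspected offline. The invocation on the next line, re-wrapped for readability; the flag glosses are a best-effort reading of spdk_dd usage, not output from this run:

    # Re-wrapped form of the trim.sh@105 command below (glosses are assumptions):
    #   --ib=ftl0  read from the SPDK bdev named ftl0 (as opposed to --if for a file)
    #   --of=...   write to an ordinary output file
    #   --count    number of blocks to copy (65536)
    #   --json     SPDK config replayed at startup so ftl0 exists inside spdk_dd
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json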
00:28:20.697 13:48:12 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:20.697 [2024-11-20 13:48:12.542273] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:28:20.697 [2024-11-20 13:48:12.542644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79025 ] 00:28:20.697 [2024-11-20 13:48:12.715644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.956 [2024-11-20 13:48:12.817682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.215 [2024-11-20 13:48:13.132841] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:21.215 [2024-11-20 13:48:13.132937] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:21.475 [2024-11-20 13:48:13.294727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.294801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:21.475 [2024-11-20 13:48:13.294821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:21.475 [2024-11-20 13:48:13.294833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.298150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.298334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:21.475 [2024-11-20 13:48:13.298363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:28:21.475 [2024-11-20 13:48:13.298375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.298554] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:21.475 [2024-11-20 13:48:13.299525] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:21.475 [2024-11-20 13:48:13.299568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.299582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:21.475 [2024-11-20 13:48:13.299594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:28:21.475 [2024-11-20 13:48:13.299605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.300888] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:21.475 [2024-11-20 13:48:13.317202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.317262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:21.475 [2024-11-20 13:48:13.317281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.336 ms 00:28:21.475 [2024-11-20 13:48:13.317294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.317432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.317454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:21.475 [2024-11-20 13:48:13.317467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:21.475 [2024-11-20
13:48:13.317478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.321894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.321946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:21.475 [2024-11-20 13:48:13.321962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:28:21.475 [2024-11-20 13:48:13.321973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.322129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.322149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:21.475 [2024-11-20 13:48:13.322162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:21.475 [2024-11-20 13:48:13.322173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.322216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.322237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:21.475 [2024-11-20 13:48:13.322250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:21.475 [2024-11-20 13:48:13.322261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.322293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:21.475 [2024-11-20 13:48:13.326604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.326642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:21.475 [2024-11-20 13:48:13.326657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.321 ms 00:28:21.475 [2024-11-20 13:48:13.326677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.326754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.326773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:21.475 [2024-11-20 13:48:13.326785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:21.475 [2024-11-20 13:48:13.326796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.326829] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:21.475 [2024-11-20 13:48:13.326862] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:21.475 [2024-11-20 13:48:13.326929] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:21.475 [2024-11-20 13:48:13.326950] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:21.475 [2024-11-20 13:48:13.327063] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:21.475 [2024-11-20 13:48:13.327079] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:21.475 [2024-11-20 13:48:13.327093] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
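Both layout dumps in this run are internally consistent on the L2P sizing: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB reported for the l2p region. A one-line check (assuming 1 MiB = 1048576 bytes):

    # L2P table size = entries * address size, expressed in MiB
    echo $((23592960 * 4 / 1048576))   # prints 90, matching "Region l2p ... blocks: 90.00 MiB"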
00:28:21.475 [2024-11-20 13:48:13.327108] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327127] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327140] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:21.475 [2024-11-20 13:48:13.327151] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:21.475 [2024-11-20 13:48:13.327162] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:21.475 [2024-11-20 13:48:13.327172] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:21.475 [2024-11-20 13:48:13.327185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.327195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:21.475 [2024-11-20 13:48:13.327217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:28:21.475 [2024-11-20 13:48:13.327228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.327332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.475 [2024-11-20 13:48:13.327359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:21.475 [2024-11-20 13:48:13.327371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:21.475 [2024-11-20 13:48:13.327382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.475 [2024-11-20 13:48:13.327528] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:21.475 [2024-11-20 13:48:13.327546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:21.475 [2024-11-20 13:48:13.327558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:21.475 [2024-11-20 13:48:13.327592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:21.475 [2024-11-20 13:48:13.327623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:21.475 [2024-11-20 13:48:13.327643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:21.475 [2024-11-20 13:48:13.327654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:21.475 [2024-11-20 13:48:13.327664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:21.475 [2024-11-20 13:48:13.327688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:21.475 [2024-11-20 13:48:13.327699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:21.475 [2024-11-20 13:48:13.327710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:28:21.475 [2024-11-20 13:48:13.327731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:21.475 [2024-11-20 13:48:13.327761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:21.475 [2024-11-20 13:48:13.327791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:21.475 [2024-11-20 13:48:13.327822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:21.475 [2024-11-20 13:48:13.327852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.475 [2024-11-20 13:48:13.327897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:21.475 [2024-11-20 13:48:13.327909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:21.475 [2024-11-20 13:48:13.327920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:21.475 [2024-11-20 13:48:13.327930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:21.475 [2024-11-20 13:48:13.327948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:21.476 [2024-11-20 13:48:13.327958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:21.476 [2024-11-20 13:48:13.327980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:21.476 [2024-11-20 13:48:13.327990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:21.476 [2024-11-20 13:48:13.328000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.476 [2024-11-20 13:48:13.328010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:21.476 [2024-11-20 13:48:13.328021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:21.476 [2024-11-20 13:48:13.328031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.476 [2024-11-20 13:48:13.328040] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:21.476 [2024-11-20 13:48:13.328052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:21.476 [2024-11-20 13:48:13.328062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:21.476 [2024-11-20 13:48:13.328078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.476 [2024-11-20 13:48:13.328090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:21.476 [2024-11-20 13:48:13.328101] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:21.476 [2024-11-20 13:48:13.328111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:21.476 [2024-11-20 13:48:13.328121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:21.476 [2024-11-20 13:48:13.328131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:21.476 [2024-11-20 13:48:13.328141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:21.476 [2024-11-20 13:48:13.328153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:21.476 [2024-11-20 13:48:13.328167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:21.476 [2024-11-20 13:48:13.328179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:21.476 [2024-11-20 13:48:13.328191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:21.476 [2024-11-20 13:48:13.328202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:21.476 [2024-11-20 13:48:13.328213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:21.476 [2024-11-20 13:48:13.328224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:21.476 [2024-11-20 13:48:13.328235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:21.476 [2024-11-20 13:48:13.328246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:21.476 [2024-11-20 13:48:13.328257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:21.476 [2024-11-20 13:48:13.328268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:21.476 [2024-11-20 13:48:13.328280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:21.476 [2024-11-20 13:48:13.328291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:21.476 [2024-11-20 13:48:13.328301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:21.476 [2024-11-20 13:48:13.328312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:21.476 [2024-11-20 13:48:13.328324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:21.476 [2024-11-20 13:48:13.328335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:21.476 [2024-11-20 13:48:13.328348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:21.476 [2024-11-20 13:48:13.328359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:21.476 [2024-11-20 13:48:13.328370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:21.476 [2024-11-20 13:48:13.328382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:21.476 [2024-11-20 13:48:13.328393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:21.476 [2024-11-20 13:48:13.328405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.328416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:21.476 [2024-11-20 13:48:13.328432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:28:21.476 [2024-11-20 13:48:13.328443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.361718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.361782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.476 [2024-11-20 13:48:13.361820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.202 ms 00:28:21.476 [2024-11-20 13:48:13.361831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.362046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.362073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:21.476 [2024-11-20 13:48:13.362087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:21.476 [2024-11-20 13:48:13.362098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.410782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.410848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.476 [2024-11-20 13:48:13.410884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.650 ms 00:28:21.476 [2024-11-20 13:48:13.410905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.411089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.411110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.476 [2024-11-20 13:48:13.411124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:21.476 [2024-11-20 13:48:13.411135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.411452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.411485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.476 [2024-11-20 13:48:13.411498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:28:21.476 [2024-11-20 13:48:13.411517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.411677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.411696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.476 [2024-11-20 13:48:13.411708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:28:21.476 [2024-11-20 13:48:13.411720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.429061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.429126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.476 [2024-11-20 13:48:13.429148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.309 ms 00:28:21.476 [2024-11-20 13:48:13.429160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.445822] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:21.476 [2024-11-20 13:48:13.446012] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:21.476 [2024-11-20 13:48:13.446038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.446051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:21.476 [2024-11-20 13:48:13.446064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.696 ms 00:28:21.476 [2024-11-20 13:48:13.446077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.477050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.477280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:21.476 [2024-11-20 13:48:13.477312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.865 ms 00:28:21.476 [2024-11-20 13:48:13.477327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.493496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.493556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:21.476 [2024-11-20 13:48:13.493574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.012 ms 00:28:21.476 [2024-11-20 13:48:13.493585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.509420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.509495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:21.476 [2024-11-20 13:48:13.509514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.708 ms 00:28:21.476 [2024-11-20 13:48:13.509525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.476 [2024-11-20 13:48:13.510433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.476 [2024-11-20 13:48:13.510472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:21.476 [2024-11-20 13:48:13.510487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:28:21.476 [2024-11-20 13:48:13.510498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.735 [2024-11-20 13:48:13.584427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.735 [2024-11-20 
13:48:13.584498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:21.735 [2024-11-20 13:48:13.584519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.894 ms 00:28:21.735 [2024-11-20 13:48:13.584531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.735 [2024-11-20 13:48:13.597438] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:21.735 [2024-11-20 13:48:13.611575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.735 [2024-11-20 13:48:13.611861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:21.735 [2024-11-20 13:48:13.611908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.881 ms 00:28:21.735 [2024-11-20 13:48:13.611934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.735 [2024-11-20 13:48:13.612086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.735 [2024-11-20 13:48:13.612106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:21.735 [2024-11-20 13:48:13.612120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:21.735 [2024-11-20 13:48:13.612131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.736 [2024-11-20 13:48:13.612197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.736 [2024-11-20 13:48:13.612213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:21.736 [2024-11-20 13:48:13.612225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:21.736 [2024-11-20 13:48:13.612236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.736 [2024-11-20 13:48:13.612277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.736 [2024-11-20 13:48:13.612292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:21.736 [2024-11-20 13:48:13.612304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:21.736 [2024-11-20 13:48:13.612314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.736 [2024-11-20 13:48:13.612354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:21.736 [2024-11-20 13:48:13.612369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.736 [2024-11-20 13:48:13.612381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:21.736 [2024-11-20 13:48:13.612392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:21.736 [2024-11-20 13:48:13.612402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.736 [2024-11-20 13:48:13.643908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.736 [2024-11-20 13:48:13.643963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:21.736 [2024-11-20 13:48:13.643982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.473 ms 00:28:21.736 [2024-11-20 13:48:13.643994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.736 [2024-11-20 13:48:13.644146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.736 [2024-11-20 13:48:13.644167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:21.736 [2024-11-20 
13:48:13.644180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:21.736 [2024-11-20 13:48:13.644192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.736 [2024-11-20 13:48:13.645154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:21.736 [2024-11-20 13:48:13.649274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 350.090 ms, result 0 00:28:21.736 [2024-11-20 13:48:13.650154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:21.736 [2024-11-20 13:48:13.666846] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:23.109  [2024-11-20T13:48:16.085Z] Copying: 30/256 [MB] (30 MBps) [2024-11-20T13:48:17.021Z] Copying: 54/256 [MB] (23 MBps) [2024-11-20T13:48:17.966Z] Copying: 79/256 [MB] (25 MBps) [2024-11-20T13:48:18.900Z] Copying: 104/256 [MB] (24 MBps) [2024-11-20T13:48:19.837Z] Copying: 130/256 [MB] (25 MBps) [2024-11-20T13:48:20.836Z] Copying: 153/256 [MB] (23 MBps) [2024-11-20T13:48:21.769Z] Copying: 178/256 [MB] (25 MBps) [2024-11-20T13:48:23.143Z] Copying: 204/256 [MB] (25 MBps) [2024-11-20T13:48:24.078Z] Copying: 228/256 [MB] (23 MBps) [2024-11-20T13:48:24.078Z] Copying: 254/256 [MB] (26 MBps) [2024-11-20T13:48:24.078Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-20 13:48:24.050470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.039 [2024-11-20 13:48:24.064148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-20 13:48:24.064206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:32.039 [2024-11-20 13:48:24.064226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:32.039 [2024-11-20 13:48:24.064246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-20 13:48:24.064282] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:32.039 [2024-11-20 13:48:24.067678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-20 13:48:24.067717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:32.039 [2024-11-20 13:48:24.067732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.373 ms 00:28:32.039 [2024-11-20 13:48:24.067744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-20 13:48:24.068207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-20 13:48:24.068367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:32.039 [2024-11-20 13:48:24.068393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:28:32.039 [2024-11-20 13:48:24.068405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-20 13:48:24.072275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-20 13:48:24.072324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:32.039 [2024-11-20 13:48:24.072340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.839 ms 00:28:32.039 [2024-11-20 13:48:24.072352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.080342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.080407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:32.300 [2024-11-20 13:48:24.080437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.957 ms 00:28:32.300 [2024-11-20 13:48:24.080460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.112353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.112410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:32.300 [2024-11-20 13:48:24.112429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.772 ms 00:28:32.300 [2024-11-20 13:48:24.112441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.131473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.131746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:32.300 [2024-11-20 13:48:24.131787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.966 ms 00:28:32.300 [2024-11-20 13:48:24.131801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.132044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.132068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:32.300 [2024-11-20 13:48:24.132081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:28:32.300 [2024-11-20 13:48:24.132093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.163841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.163901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:32.300 [2024-11-20 13:48:24.163918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.708 ms 00:28:32.300 [2024-11-20 13:48:24.163930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.195397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.195474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:32.300 [2024-11-20 13:48:24.195495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.407 ms 00:28:32.300 [2024-11-20 13:48:24.195507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.227648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.227707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:32.300 [2024-11-20 13:48:24.227727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.055 ms 00:28:32.300 [2024-11-20 13:48:24.227739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.258590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.300 [2024-11-20 13:48:24.258774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:32.300 [2024-11-20 13:48:24.258802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.762 ms 00:28:32.300 
[2024-11-20 13:48:24.258816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.300 [2024-11-20 13:48:24.258919] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:32.300 [2024-11-20 13:48:24.258946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.258960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.258972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.258985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.258997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:32.300 [2024-11-20 13:48:24.259108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259225] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 
13:48:24.259521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:28:32.301 [2024-11-20 13:48:24.259813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.259993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:32.301 [2024-11-20 13:48:24.260102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:32.302 [2024-11-20 13:48:24.260114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:32.302 [2024-11-20 13:48:24.260126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:28:32.302 [2024-11-20 13:48:24.260137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:32.302 [2024-11-20 13:48:24.260149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:32.302 [2024-11-20 13:48:24.260169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:32.302 [2024-11-20 13:48:24.260180] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ae2244f-4aa0-4231-b84f-5d9369f8abc2 00:28:32.302 [2024-11-20 13:48:24.260192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:32.302 [2024-11-20 13:48:24.260203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:32.302 [2024-11-20 13:48:24.260214] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:32.302 [2024-11-20 13:48:24.260225] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:32.302 [2024-11-20 13:48:24.260235] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:32.302 [2024-11-20 13:48:24.260246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:32.302 [2024-11-20 13:48:24.260257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:32.302 [2024-11-20 13:48:24.260267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:32.302 [2024-11-20 13:48:24.260277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:32.302 [2024-11-20 13:48:24.260288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.302 [2024-11-20 13:48:24.260304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:32.302 [2024-11-20 13:48:24.260317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:28:32.302 [2024-11-20 13:48:24.260328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.302 [2024-11-20 13:48:24.276914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.302 [2024-11-20 13:48:24.276955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:32.302 [2024-11-20 13:48:24.276972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.559 ms 00:28:32.302 [2024-11-20 13:48:24.276984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.302 [2024-11-20 13:48:24.277441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.302 [2024-11-20 13:48:24.277464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:32.302 [2024-11-20 13:48:24.277478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:28:32.302 [2024-11-20 13:48:24.277489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.302 [2024-11-20 13:48:24.323514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.302 [2024-11-20 13:48:24.323566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.302 [2024-11-20 13:48:24.323582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.302 [2024-11-20 13:48:24.323594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.302 [2024-11-20 13:48:24.323711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.302 [2024-11-20 13:48:24.323728] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.302 [2024-11-20 13:48:24.323740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.302 [2024-11-20 13:48:24.323751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.302 [2024-11-20 13:48:24.323817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.302 [2024-11-20 13:48:24.323835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.302 [2024-11-20 13:48:24.323847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.302 [2024-11-20 13:48:24.323858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.302 [2024-11-20 13:48:24.323909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.302 [2024-11-20 13:48:24.323937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.302 [2024-11-20 13:48:24.323949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.302 [2024-11-20 13:48:24.323960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.428993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.429059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.561 [2024-11-20 13:48:24.429078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.429090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.513267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.513333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.561 [2024-11-20 13:48:24.513352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.513365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.513456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.513473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.561 [2024-11-20 13:48:24.513485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.513496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.513531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.513544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.561 [2024-11-20 13:48:24.513564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.513575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.513706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.513725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.561 [2024-11-20 13:48:24.513738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.513749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.513806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.513823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.561 [2024-11-20 13:48:24.513835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.513853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.513926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.513945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.561 [2024-11-20 13:48:24.513957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.513968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.514021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.561 [2024-11-20 13:48:24.514037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.561 [2024-11-20 13:48:24.514055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.561 [2024-11-20 13:48:24.514066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.561 [2024-11-20 13:48:24.514230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.092 ms, result 0 00:28:33.497 00:28:33.497 00:28:33.497 13:48:25 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:34.064 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:28:34.064 13:48:26 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:28:34.064 13:48:26 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:28:34.064 13:48:26 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:34.064 13:48:26 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:34.064 13:48:26 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:28:34.064 13:48:26 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:34.322 13:48:26 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78965 00:28:34.322 13:48:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78965 ']' 00:28:34.322 13:48:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78965 00:28:34.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78965) - No such process 00:28:34.322 Process with pid 78965 is not found 00:28:34.322 13:48:26 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78965 is not found' 00:28:34.322 ************************************ 00:28:34.322 END TEST ftl_trim 00:28:34.322 ************************************ 00:28:34.322 00:28:34.322 real 1m9.354s 00:28:34.322 user 1m37.366s 00:28:34.322 sys 0m7.338s 00:28:34.322 13:48:26 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.322 13:48:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:34.322 13:48:26 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:34.322 13:48:26 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:34.322 13:48:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.322 13:48:26 ftl -- common/autotest_common.sh@10 
-- # set +x 00:28:34.322 ************************************ 00:28:34.322 START TEST ftl_restore 00:28:34.322 ************************************ 00:28:34.322 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:34.322 * Looking for test storage... 00:28:34.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:34.322 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:34.322 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:34.322 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:28:34.322 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.323 13:48:26 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:28:34.323 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.323 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:34.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.323 --rc genhtml_branch_coverage=1 00:28:34.323 --rc genhtml_function_coverage=1 00:28:34.323 --rc genhtml_legend=1 00:28:34.323 --rc geninfo_all_blocks=1 00:28:34.323 --rc geninfo_unexecuted_blocks=1 00:28:34.323 00:28:34.323 ' 00:28:34.323 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:34.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.323 --rc genhtml_branch_coverage=1 00:28:34.323 --rc genhtml_function_coverage=1 00:28:34.323 --rc genhtml_legend=1 00:28:34.323 --rc geninfo_all_blocks=1 00:28:34.323 --rc geninfo_unexecuted_blocks=1 00:28:34.323 00:28:34.323 ' 00:28:34.323 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:34.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.323 --rc genhtml_branch_coverage=1 00:28:34.323 --rc genhtml_function_coverage=1 00:28:34.323 --rc genhtml_legend=1 00:28:34.323 --rc geninfo_all_blocks=1 00:28:34.323 --rc geninfo_unexecuted_blocks=1 00:28:34.323 00:28:34.323 ' 00:28:34.323 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:34.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.323 --rc genhtml_branch_coverage=1 00:28:34.323 --rc genhtml_function_coverage=1 00:28:34.323 --rc genhtml_legend=1 00:28:34.323 --rc geninfo_all_blocks=1 00:28:34.323 --rc geninfo_unexecuted_blocks=1 00:28:34.323 00:28:34.323 ' 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:34.323 13:48:26 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.xXsMBy7Oyr 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:34.582 
13:48:26 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79223 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79223 00:28:34.582 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79223 ']' 00:28:34.582 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.582 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.582 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.582 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.582 13:48:26 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:34.582 13:48:26 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:34.582 [2024-11-20 13:48:26.491405] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:28:34.582 [2024-11-20 13:48:26.491630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79223 ] 00:28:34.841 [2024-11-20 13:48:26.679036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.841 [2024-11-20 13:48:26.811651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.777 13:48:27 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.777 13:48:27 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:28:35.777 13:48:27 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:35.777 13:48:27 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:28:35.777 13:48:27 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:35.777 13:48:27 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:28:35.777 13:48:27 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:28:35.777 13:48:27 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:36.035 13:48:28 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:36.035 13:48:28 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:28:36.035 13:48:28 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:36.035 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:36.035 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:36.035 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:36.035 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:36.035 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:36.294 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:36.294 { 00:28:36.294 "name": "nvme0n1", 00:28:36.294 "aliases": [ 00:28:36.294 "5cdea069-a0a7-4117-b2f7-07463e0d8ef8" 00:28:36.294 ], 00:28:36.294 "product_name": "NVMe disk", 00:28:36.294 "block_size": 4096, 00:28:36.294 "num_blocks": 1310720, 00:28:36.294 "uuid": 
"5cdea069-a0a7-4117-b2f7-07463e0d8ef8", 00:28:36.294 "numa_id": -1, 00:28:36.294 "assigned_rate_limits": { 00:28:36.294 "rw_ios_per_sec": 0, 00:28:36.294 "rw_mbytes_per_sec": 0, 00:28:36.294 "r_mbytes_per_sec": 0, 00:28:36.294 "w_mbytes_per_sec": 0 00:28:36.294 }, 00:28:36.294 "claimed": true, 00:28:36.294 "claim_type": "read_many_write_one", 00:28:36.294 "zoned": false, 00:28:36.294 "supported_io_types": { 00:28:36.294 "read": true, 00:28:36.294 "write": true, 00:28:36.294 "unmap": true, 00:28:36.294 "flush": true, 00:28:36.294 "reset": true, 00:28:36.294 "nvme_admin": true, 00:28:36.294 "nvme_io": true, 00:28:36.294 "nvme_io_md": false, 00:28:36.294 "write_zeroes": true, 00:28:36.294 "zcopy": false, 00:28:36.294 "get_zone_info": false, 00:28:36.294 "zone_management": false, 00:28:36.294 "zone_append": false, 00:28:36.294 "compare": true, 00:28:36.294 "compare_and_write": false, 00:28:36.294 "abort": true, 00:28:36.294 "seek_hole": false, 00:28:36.294 "seek_data": false, 00:28:36.294 "copy": true, 00:28:36.294 "nvme_iov_md": false 00:28:36.294 }, 00:28:36.294 "driver_specific": { 00:28:36.294 "nvme": [ 00:28:36.294 { 00:28:36.294 "pci_address": "0000:00:11.0", 00:28:36.294 "trid": { 00:28:36.294 "trtype": "PCIe", 00:28:36.294 "traddr": "0000:00:11.0" 00:28:36.294 }, 00:28:36.294 "ctrlr_data": { 00:28:36.294 "cntlid": 0, 00:28:36.294 "vendor_id": "0x1b36", 00:28:36.294 "model_number": "QEMU NVMe Ctrl", 00:28:36.294 "serial_number": "12341", 00:28:36.294 "firmware_revision": "8.0.0", 00:28:36.294 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:36.294 "oacs": { 00:28:36.294 "security": 0, 00:28:36.294 "format": 1, 00:28:36.294 "firmware": 0, 00:28:36.294 "ns_manage": 1 00:28:36.294 }, 00:28:36.294 "multi_ctrlr": false, 00:28:36.294 "ana_reporting": false 00:28:36.294 }, 00:28:36.294 "vs": { 00:28:36.294 "nvme_version": "1.4" 00:28:36.294 }, 00:28:36.294 "ns_data": { 00:28:36.294 "id": 1, 00:28:36.294 "can_share": false 00:28:36.294 } 00:28:36.294 } 00:28:36.294 ], 00:28:36.294 "mp_policy": "active_passive" 00:28:36.294 } 00:28:36.294 } 00:28:36.294 ]' 00:28:36.294 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:36.552 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:36.552 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:36.552 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:36.552 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:36.553 13:48:28 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:28:36.553 13:48:28 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:28:36.553 13:48:28 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:36.553 13:48:28 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:28:36.553 13:48:28 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:36.553 13:48:28 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:36.811 13:48:28 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=f7a082b7-001e-4ba6-a6b9-37e9e58c1d38 00:28:36.811 13:48:28 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:28:36.811 13:48:28 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7a082b7-001e-4ba6-a6b9-37e9e58c1d38 00:28:37.071 13:48:29 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:28:37.332 13:48:29 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=311c3ae9-f81e-42a8-b2b0-160707218a73 00:28:37.332 13:48:29 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 311c3ae9-f81e-42a8-b2b0-160707218a73 00:28:37.905 13:48:29 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:37.905 13:48:29 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:37.905 13:48:29 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:37.906 13:48:29 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:28:37.906 13:48:29 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:37.906 13:48:29 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:37.906 13:48:29 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:28:37.906 13:48:29 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:37.906 13:48:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:37.906 13:48:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:37.906 13:48:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:37.906 13:48:29 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:37.906 13:48:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:38.165 13:48:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:38.165 { 00:28:38.165 "name": "8700d63a-0d1d-47d1-84e0-4ef6723c552d", 00:28:38.165 "aliases": [ 00:28:38.165 "lvs/nvme0n1p0" 00:28:38.165 ], 00:28:38.165 "product_name": "Logical Volume", 00:28:38.165 "block_size": 4096, 00:28:38.165 "num_blocks": 26476544, 00:28:38.165 "uuid": "8700d63a-0d1d-47d1-84e0-4ef6723c552d", 00:28:38.165 "assigned_rate_limits": { 00:28:38.165 "rw_ios_per_sec": 0, 00:28:38.165 "rw_mbytes_per_sec": 0, 00:28:38.165 "r_mbytes_per_sec": 0, 00:28:38.165 "w_mbytes_per_sec": 0 00:28:38.165 }, 00:28:38.165 "claimed": false, 00:28:38.165 "zoned": false, 00:28:38.165 "supported_io_types": { 00:28:38.165 "read": true, 00:28:38.165 "write": true, 00:28:38.165 "unmap": true, 00:28:38.165 "flush": false, 00:28:38.165 "reset": true, 00:28:38.165 "nvme_admin": false, 00:28:38.165 "nvme_io": false, 00:28:38.165 "nvme_io_md": false, 00:28:38.165 "write_zeroes": true, 00:28:38.165 "zcopy": false, 00:28:38.165 "get_zone_info": false, 00:28:38.165 "zone_management": false, 00:28:38.165 "zone_append": false, 00:28:38.165 "compare": false, 00:28:38.165 "compare_and_write": false, 00:28:38.165 "abort": false, 00:28:38.165 "seek_hole": true, 00:28:38.165 "seek_data": true, 00:28:38.165 "copy": false, 00:28:38.165 "nvme_iov_md": false 00:28:38.165 }, 00:28:38.165 "driver_specific": { 00:28:38.165 "lvol": { 00:28:38.165 "lvol_store_uuid": "311c3ae9-f81e-42a8-b2b0-160707218a73", 00:28:38.165 "base_bdev": "nvme0n1", 00:28:38.165 "thin_provision": true, 00:28:38.165 "num_allocated_clusters": 0, 00:28:38.165 "snapshot": false, 00:28:38.165 "clone": false, 00:28:38.165 "esnap_clone": false 00:28:38.165 } 00:28:38.165 } 00:28:38.165 } 00:28:38.165 ]' 00:28:38.165 13:48:29 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:38.165 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:38.165 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:38.165 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:38.165 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:38.165 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:38.165 13:48:30 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:28:38.165 13:48:30 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:28:38.165 13:48:30 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:38.424 13:48:30 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:38.424 13:48:30 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:38.424 13:48:30 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:38.424 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:38.424 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:38.424 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:38.424 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:38.424 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:38.683 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:38.683 { 00:28:38.683 "name": "8700d63a-0d1d-47d1-84e0-4ef6723c552d", 00:28:38.683 "aliases": [ 00:28:38.683 "lvs/nvme0n1p0" 00:28:38.683 ], 00:28:38.683 "product_name": "Logical Volume", 00:28:38.683 "block_size": 4096, 00:28:38.683 "num_blocks": 26476544, 00:28:38.683 "uuid": "8700d63a-0d1d-47d1-84e0-4ef6723c552d", 00:28:38.683 "assigned_rate_limits": { 00:28:38.683 "rw_ios_per_sec": 0, 00:28:38.683 "rw_mbytes_per_sec": 0, 00:28:38.683 "r_mbytes_per_sec": 0, 00:28:38.683 "w_mbytes_per_sec": 0 00:28:38.683 }, 00:28:38.683 "claimed": false, 00:28:38.683 "zoned": false, 00:28:38.683 "supported_io_types": { 00:28:38.683 "read": true, 00:28:38.683 "write": true, 00:28:38.683 "unmap": true, 00:28:38.683 "flush": false, 00:28:38.683 "reset": true, 00:28:38.683 "nvme_admin": false, 00:28:38.683 "nvme_io": false, 00:28:38.683 "nvme_io_md": false, 00:28:38.683 "write_zeroes": true, 00:28:38.683 "zcopy": false, 00:28:38.683 "get_zone_info": false, 00:28:38.683 "zone_management": false, 00:28:38.683 "zone_append": false, 00:28:38.683 "compare": false, 00:28:38.683 "compare_and_write": false, 00:28:38.683 "abort": false, 00:28:38.683 "seek_hole": true, 00:28:38.683 "seek_data": true, 00:28:38.683 "copy": false, 00:28:38.683 "nvme_iov_md": false 00:28:38.683 }, 00:28:38.683 "driver_specific": { 00:28:38.683 "lvol": { 00:28:38.683 "lvol_store_uuid": "311c3ae9-f81e-42a8-b2b0-160707218a73", 00:28:38.683 "base_bdev": "nvme0n1", 00:28:38.683 "thin_provision": true, 00:28:38.683 "num_allocated_clusters": 0, 00:28:38.683 "snapshot": false, 00:28:38.683 "clone": false, 00:28:38.683 "esnap_clone": false 00:28:38.683 } 00:28:38.683 } 00:28:38.683 } 00:28:38.683 ]' 00:28:38.683 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
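For reference, the get_bdev_size helper traced above computes a bdev's size in MiB straight from the bdev_get_bdevs JSON: jq pulls .block_size and .num_blocks and the product is divided down to MiB (4096 x 1310720 = 5120 MiB for nvme0n1; 4096 x 26476544 = 103424 MiB for the thin-provisioned lvol). A minimal sketch of the same computation, assuming scripts/rpc.py is on PATH, the default RPC socket, and a single-element JSON array (get_bdev_size_mb is an illustrative name):
get_bdev_size_mb() {
    local bdev=$1 bs nb
    # Same jq filters as the trace above; each call returns one number.
    bs=$(rpc.py bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')
    nb=$(rpc.py bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')
    echo $((bs * nb / 1024 / 1024))   # bytes -> MiB, e.g. 4096*1310720 -> 5120
}
get_bdev_size_mb nvme0n1   # prints 5120 against the QEMU namespace above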
00:28:38.941 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:38.941 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:38.941 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:38.941 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:38.941 13:48:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:38.941 13:48:30 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:28:38.941 13:48:30 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:39.199 13:48:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:39.199 13:48:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:39.199 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:39.200 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:39.200 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:39.200 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:39.200 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8700d63a-0d1d-47d1-84e0-4ef6723c552d 00:28:39.459 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:39.459 { 00:28:39.459 "name": "8700d63a-0d1d-47d1-84e0-4ef6723c552d", 00:28:39.459 "aliases": [ 00:28:39.459 "lvs/nvme0n1p0" 00:28:39.459 ], 00:28:39.459 "product_name": "Logical Volume", 00:28:39.459 "block_size": 4096, 00:28:39.459 "num_blocks": 26476544, 00:28:39.459 "uuid": "8700d63a-0d1d-47d1-84e0-4ef6723c552d", 00:28:39.459 "assigned_rate_limits": { 00:28:39.459 "rw_ios_per_sec": 0, 00:28:39.459 "rw_mbytes_per_sec": 0, 00:28:39.459 "r_mbytes_per_sec": 0, 00:28:39.459 "w_mbytes_per_sec": 0 00:28:39.459 }, 00:28:39.459 "claimed": false, 00:28:39.459 "zoned": false, 00:28:39.459 "supported_io_types": { 00:28:39.459 "read": true, 00:28:39.459 "write": true, 00:28:39.459 "unmap": true, 00:28:39.459 "flush": false, 00:28:39.459 "reset": true, 00:28:39.459 "nvme_admin": false, 00:28:39.459 "nvme_io": false, 00:28:39.459 "nvme_io_md": false, 00:28:39.459 "write_zeroes": true, 00:28:39.459 "zcopy": false, 00:28:39.459 "get_zone_info": false, 00:28:39.459 "zone_management": false, 00:28:39.459 "zone_append": false, 00:28:39.459 "compare": false, 00:28:39.459 "compare_and_write": false, 00:28:39.459 "abort": false, 00:28:39.459 "seek_hole": true, 00:28:39.459 "seek_data": true, 00:28:39.459 "copy": false, 00:28:39.459 "nvme_iov_md": false 00:28:39.459 }, 00:28:39.459 "driver_specific": { 00:28:39.459 "lvol": { 00:28:39.459 "lvol_store_uuid": "311c3ae9-f81e-42a8-b2b0-160707218a73", 00:28:39.459 "base_bdev": "nvme0n1", 00:28:39.459 "thin_provision": true, 00:28:39.459 "num_allocated_clusters": 0, 00:28:39.459 "snapshot": false, 00:28:39.459 "clone": false, 00:28:39.459 "esnap_clone": false 00:28:39.459 } 00:28:39.459 } 00:28:39.459 } 00:28:39.459 ]' 00:28:39.459 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:39.459 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:39.459 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:39.459 13:48:31 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:28:39.459 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:39.459 13:48:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:39.459 13:48:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:39.459 13:48:31 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8700d63a-0d1d-47d1-84e0-4ef6723c552d --l2p_dram_limit 10' 00:28:39.459 13:48:31 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:39.459 13:48:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:39.459 13:48:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:39.459 13:48:31 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:28:39.459 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:28:39.459 13:48:31 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8700d63a-0d1d-47d1-84e0-4ef6723c552d --l2p_dram_limit 10 -c nvc0n1p0 00:28:39.718 [2024-11-20 13:48:31.741307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.741378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:39.718 [2024-11-20 13:48:31.741407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:39.718 [2024-11-20 13:48:31.741423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.741508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.741527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:39.718 [2024-11-20 13:48:31.741545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:39.718 [2024-11-20 13:48:31.741558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.741600] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:39.718 [2024-11-20 13:48:31.742612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:39.718 [2024-11-20 13:48:31.742671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.742702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:39.718 [2024-11-20 13:48:31.742722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:28:39.718 [2024-11-20 13:48:31.742736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.742848] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 75407c1e-ca5b-4724-90c3-ab5917c4cf24 00:28:39.718 [2024-11-20 13:48:31.743979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.744026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:39.718 [2024-11-20 13:48:31.744045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:39.718 [2024-11-20 13:48:31.744061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.749047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 
13:48:31.749119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:39.718 [2024-11-20 13:48:31.749139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:28:39.718 [2024-11-20 13:48:31.749154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.749313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.749340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:39.718 [2024-11-20 13:48:31.749356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:39.718 [2024-11-20 13:48:31.749375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.749479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.749503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:39.718 [2024-11-20 13:48:31.749518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:39.718 [2024-11-20 13:48:31.749537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.749574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:39.718 [2024-11-20 13:48:31.754980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.755053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:39.718 [2024-11-20 13:48:31.755088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.409 ms 00:28:39.718 [2024-11-20 13:48:31.755109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.755181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.718 [2024-11-20 13:48:31.755208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:39.718 [2024-11-20 13:48:31.755234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:39.718 [2024-11-20 13:48:31.755255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.718 [2024-11-20 13:48:31.755332] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:39.718 [2024-11-20 13:48:31.755544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:39.977 [2024-11-20 13:48:31.755603] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:39.977 [2024-11-20 13:48:31.755632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:39.977 [2024-11-20 13:48:31.755663] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:39.977 [2024-11-20 13:48:31.755691] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:39.977 [2024-11-20 13:48:31.755719] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:39.977 [2024-11-20 13:48:31.755740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:39.977 [2024-11-20 13:48:31.755771] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:39.977 [2024-11-20 13:48:31.755792] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:39.977 [2024-11-20 13:48:31.755816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.977 [2024-11-20 13:48:31.755837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:39.977 [2024-11-20 13:48:31.755863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:28:39.977 [2024-11-20 13:48:31.755938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.977 [2024-11-20 13:48:31.756074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.977 [2024-11-20 13:48:31.756107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:39.977 [2024-11-20 13:48:31.756150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:39.977 [2024-11-20 13:48:31.756174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.977 [2024-11-20 13:48:31.756349] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:39.977 [2024-11-20 13:48:31.756388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:39.977 [2024-11-20 13:48:31.756421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.977 [2024-11-20 13:48:31.756445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.977 [2024-11-20 13:48:31.756472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:39.977 [2024-11-20 13:48:31.756494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:39.977 [2024-11-20 13:48:31.756519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:39.977 [2024-11-20 13:48:31.756541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:39.977 [2024-11-20 13:48:31.756567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:39.977 [2024-11-20 13:48:31.756589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.977 [2024-11-20 13:48:31.756611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:39.977 [2024-11-20 13:48:31.756634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:39.977 [2024-11-20 13:48:31.756659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.977 [2024-11-20 13:48:31.756679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:39.977 [2024-11-20 13:48:31.756701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:39.977 [2024-11-20 13:48:31.756720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.977 [2024-11-20 13:48:31.756748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:39.977 [2024-11-20 13:48:31.756769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:39.977 [2024-11-20 13:48:31.756791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.977 [2024-11-20 13:48:31.756811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:39.977 [2024-11-20 13:48:31.756832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:39.977 [2024-11-20 13:48:31.756850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.977 [2024-11-20 13:48:31.756892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:39.977 
[2024-11-20 13:48:31.756921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:39.977 [2024-11-20 13:48:31.756948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.977 [2024-11-20 13:48:31.756973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:39.977 [2024-11-20 13:48:31.757011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:39.977 [2024-11-20 13:48:31.757033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.977 [2024-11-20 13:48:31.757060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:39.977 [2024-11-20 13:48:31.757083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:39.977 [2024-11-20 13:48:31.757110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.977 [2024-11-20 13:48:31.757132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:39.977 [2024-11-20 13:48:31.757158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:39.977 [2024-11-20 13:48:31.757178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.977 [2024-11-20 13:48:31.757201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:39.977 [2024-11-20 13:48:31.757220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:39.977 [2024-11-20 13:48:31.757242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.977 [2024-11-20 13:48:31.757262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:39.977 [2024-11-20 13:48:31.757287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:39.977 [2024-11-20 13:48:31.757312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.977 [2024-11-20 13:48:31.757339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:39.977 [2024-11-20 13:48:31.757364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:39.977 [2024-11-20 13:48:31.757390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.977 [2024-11-20 13:48:31.757412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:39.977 [2024-11-20 13:48:31.757445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:39.977 [2024-11-20 13:48:31.757470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.977 [2024-11-20 13:48:31.757496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.978 [2024-11-20 13:48:31.757519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:39.978 [2024-11-20 13:48:31.757546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:39.978 [2024-11-20 13:48:31.757566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:39.978 [2024-11-20 13:48:31.757598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:39.978 [2024-11-20 13:48:31.757618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:39.978 [2024-11-20 13:48:31.757645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:39.978 [2024-11-20 13:48:31.757674] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:39.978 [2024-11-20 
13:48:31.757708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.978 [2024-11-20 13:48:31.757741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:39.978 [2024-11-20 13:48:31.757769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:39.978 [2024-11-20 13:48:31.757793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:39.978 [2024-11-20 13:48:31.757821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:39.978 [2024-11-20 13:48:31.757845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:39.978 [2024-11-20 13:48:31.757893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:39.978 [2024-11-20 13:48:31.757920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:39.978 [2024-11-20 13:48:31.757944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:39.978 [2024-11-20 13:48:31.757963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:39.978 [2024-11-20 13:48:31.757990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:39.978 [2024-11-20 13:48:31.758072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:39.978 [2024-11-20 13:48:31.758104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:39.978 [2024-11-20 13:48:31.758125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:39.978 [2024-11-20 13:48:31.758148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:39.978 [2024-11-20 13:48:31.758175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:39.978 [2024-11-20 13:48:31.758203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.978 [2024-11-20 13:48:31.758227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:39.978 [2024-11-20 13:48:31.758304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:39.978 [2024-11-20 13:48:31.758326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:39.978 [2024-11-20 13:48:31.758348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:39.978 [2024-11-20 13:48:31.758371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.978 [2024-11-20 13:48:31.758394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:39.978 [2024-11-20 13:48:31.758415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:28:39.978 [2024-11-20 13:48:31.758436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.978 [2024-11-20 13:48:31.758517] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:39.978 [2024-11-20 13:48:31.758552] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:41.894 [2024-11-20 13:48:33.729404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.729481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:41.894 [2024-11-20 13:48:33.729505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1970.899 ms 00:28:41.894 [2024-11-20 13:48:33.729522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.894 [2024-11-20 13:48:33.762532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.762602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:41.894 [2024-11-20 13:48:33.762624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.725 ms 00:28:41.894 [2024-11-20 13:48:33.762641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.894 [2024-11-20 13:48:33.762859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.762904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:41.894 [2024-11-20 13:48:33.762922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:41.894 [2024-11-20 13:48:33.762944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.894 [2024-11-20 13:48:33.803997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.804080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:41.894 [2024-11-20 13:48:33.804102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.986 ms 00:28:41.894 [2024-11-20 13:48:33.804120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.894 [2024-11-20 13:48:33.804210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.804257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:41.894 [2024-11-20 13:48:33.804276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:41.894 [2024-11-20 13:48:33.804292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.894 [2024-11-20 13:48:33.804770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.804810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:41.894 [2024-11-20 13:48:33.804828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:28:41.894 [2024-11-20 13:48:33.804844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.894 
[2024-11-20 13:48:33.805002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.805030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:41.894 [2024-11-20 13:48:33.805048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:28:41.894 [2024-11-20 13:48:33.805066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.894 [2024-11-20 13:48:33.823434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.894 [2024-11-20 13:48:33.823519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:41.894 [2024-11-20 13:48:33.823541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.336 ms 00:28:41.894 [2024-11-20 13:48:33.823557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.895 [2024-11-20 13:48:33.837888] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:41.895 [2024-11-20 13:48:33.840772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.895 [2024-11-20 13:48:33.840813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:41.895 [2024-11-20 13:48:33.840837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:28:41.895 [2024-11-20 13:48:33.840851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.153 [2024-11-20 13:48:33.945177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.153 [2024-11-20 13:48:33.945264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:42.153 [2024-11-20 13:48:33.945294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.234 ms 00:28:42.153 [2024-11-20 13:48:33.945311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.153 [2024-11-20 13:48:33.945681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.153 [2024-11-20 13:48:33.945724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:42.153 [2024-11-20 13:48:33.945750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:28:42.153 [2024-11-20 13:48:33.945767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.153 [2024-11-20 13:48:33.984536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.153 [2024-11-20 13:48:33.984620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:42.153 [2024-11-20 13:48:33.984650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.612 ms 00:28:42.153 [2024-11-20 13:48:33.984667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.153 [2024-11-20 13:48:34.022996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.153 [2024-11-20 13:48:34.023090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:42.153 [2024-11-20 13:48:34.023122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.205 ms 00:28:42.153 [2024-11-20 13:48:34.023139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.153 [2024-11-20 13:48:34.024088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.153 [2024-11-20 13:48:34.024128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:42.153 
[2024-11-20 13:48:34.024151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:28:42.153 [2024-11-20 13:48:34.024171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.153 [2024-11-20 13:48:34.122131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.153 [2024-11-20 13:48:34.122235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:42.153 [2024-11-20 13:48:34.122271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.842 ms 00:28:42.153 [2024-11-20 13:48:34.122288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.153 [2024-11-20 13:48:34.167829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.153 [2024-11-20 13:48:34.167920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:42.153 [2024-11-20 13:48:34.167951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.324 ms 00:28:42.153 [2024-11-20 13:48:34.167969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.411 [2024-11-20 13:48:34.207237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.411 [2024-11-20 13:48:34.207325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:42.411 [2024-11-20 13:48:34.207356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.176 ms 00:28:42.411 [2024-11-20 13:48:34.207372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.411 [2024-11-20 13:48:34.247971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.411 [2024-11-20 13:48:34.248046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:42.411 [2024-11-20 13:48:34.248074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.467 ms 00:28:42.411 [2024-11-20 13:48:34.248100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.411 [2024-11-20 13:48:34.248188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.411 [2024-11-20 13:48:34.248211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:42.411 [2024-11-20 13:48:34.248236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:42.411 [2024-11-20 13:48:34.248252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.411 [2024-11-20 13:48:34.248455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.411 [2024-11-20 13:48:34.248495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:42.411 [2024-11-20 13:48:34.248524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:42.411 [2024-11-20 13:48:34.248540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.411 [2024-11-20 13:48:34.249934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2508.017 ms, result 0 00:28:42.411 { 00:28:42.411 "name": "ftl0", 00:28:42.411 "uuid": "75407c1e-ca5b-4724-90c3-ab5917c4cf24" 00:28:42.411 } 00:28:42.411 13:48:34 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:42.411 13:48:34 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:42.669 13:48:34 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:28:42.669 13:48:34 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:42.927 [2024-11-20 13:48:34.893252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.927 [2024-11-20 13:48:34.893324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:42.927 [2024-11-20 13:48:34.893347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:42.927 [2024-11-20 13:48:34.893376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.927 [2024-11-20 13:48:34.893417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:42.927 [2024-11-20 13:48:34.896859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.927 [2024-11-20 13:48:34.896906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:42.927 [2024-11-20 13:48:34.896927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:28:42.927 [2024-11-20 13:48:34.896941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.927 [2024-11-20 13:48:34.897272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.927 [2024-11-20 13:48:34.897312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:42.927 [2024-11-20 13:48:34.897332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:28:42.927 [2024-11-20 13:48:34.897346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.927 [2024-11-20 13:48:34.900669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.927 [2024-11-20 13:48:34.900705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:42.927 [2024-11-20 13:48:34.900725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:28:42.927 [2024-11-20 13:48:34.900738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.927 [2024-11-20 13:48:34.907472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.928 [2024-11-20 13:48:34.907511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:42.928 [2024-11-20 13:48:34.907534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.699 ms 00:28:42.928 [2024-11-20 13:48:34.907554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.928 [2024-11-20 13:48:34.939528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.928 [2024-11-20 13:48:34.939597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:42.928 [2024-11-20 13:48:34.939622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.877 ms 00:28:42.928 [2024-11-20 13:48:34.939636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.928 [2024-11-20 13:48:34.958466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.928 [2024-11-20 13:48:34.958551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:42.928 [2024-11-20 13:48:34.958577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.752 ms 00:28:42.928 [2024-11-20 13:48:34.958592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.928 [2024-11-20 13:48:34.958833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.928 [2024-11-20 13:48:34.958857] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:42.928 [2024-11-20 13:48:34.958904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:28:42.928 [2024-11-20 13:48:34.958920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.187 [2024-11-20 13:48:34.990910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.187 [2024-11-20 13:48:34.990976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:43.187 [2024-11-20 13:48:34.991002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.945 ms 00:28:43.187 [2024-11-20 13:48:34.991025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.187 [2024-11-20 13:48:35.028038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.187 [2024-11-20 13:48:35.028135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:43.187 [2024-11-20 13:48:35.028160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.917 ms 00:28:43.187 [2024-11-20 13:48:35.028174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.187 [2024-11-20 13:48:35.061797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.187 [2024-11-20 13:48:35.061901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:43.187 [2024-11-20 13:48:35.061930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.508 ms 00:28:43.187 [2024-11-20 13:48:35.061944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.187 [2024-11-20 13:48:35.094105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.187 [2024-11-20 13:48:35.094179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:43.187 [2024-11-20 13:48:35.094203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.944 ms 00:28:43.187 [2024-11-20 13:48:35.094218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.187 [2024-11-20 13:48:35.094295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:43.187 [2024-11-20 13:48:35.094322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:43.187 [2024-11-20 13:48:35.094460] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 10-100: 0 / 261120 wr_cnt: 0 state: free
00:28:43.188 [2024-11-20 13:48:35.095837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:43.188 [2024-11-20 13:48:35.095857] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75407c1e-ca5b-4724-90c3-ab5917c4cf24
00:28:43.188 [2024-11-20 13:48:35.095883] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:43.188 [2024-11-20 13:48:35.095901] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:28:43.188 [2024-11-20 13:48:35.095914] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:43.188 [2024-11-20 13:48:35.095933] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:28:43.188 [2024-11-20 13:48:35.095945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:43.189 [2024-11-20 13:48:35.095959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:43.189 [2024-11-20 13:48:35.095972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:43.189 [2024-11-20 13:48:35.095985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:43.189 [2024-11-20 13:48:35.095996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
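The stats block above is self-consistent: write amplification is media writes divided by host writes, and with user writes at 0 against 960 total writes (apparently all metadata traffic at this point), the ratio has no finite value, which ftl_debug.c prints as inf:

    WAF = total writes / user writes = 960 / 0  ->  inf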
00:28:43.189 [2024-11-20 13:48:35.096011] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.720 ms, status 0
00:28:43.189 [2024-11-20 13:48:35.113116] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 16.965 ms, status 0
00:28:43.189 [2024-11-20 13:48:35.113664] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.409 ms, status 0
00:28:43.189 [2024-11-20 13:48:35.169476] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
00:28:43.189 [2024-11-20 13:48:35.169681] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
00:28:43.189 [2024-11-20 13:48:35.169913] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
00:28:43.189 [2024-11-20 13:48:35.170009] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.275399] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.364248] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.364547] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.364681] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.364903] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.365037] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.365144] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.365262] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
00:28:43.448 [2024-11-20 13:48:35.365482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.195 ms, result 0
00:28:43.448 true
00:28:43.448 13:48:35 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79223
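The shell trace that follows is autotest_common.sh's killprocess helper tearing down the SPDK app. A minimal sketch of the guarded-kill pattern it performs, reconstructed from the @954-@978 trace lines below rather than copied from the SPDK source:

    # Illustrative reconstruction from the trace; not the exact SPDK source.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # @954: require a PID argument
        kill -0 "$pid" || return 1             # @958: signal 0 = existence check
        local process_name=
        if [ "$(uname)" = Linux ]; then        # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 here
        fi
        # @964 compares the name against 'sudo'; that branch is not taken
        # in this run, so only the plain-kill path is sketched.
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"   # @972
            kill "$pid"                            # @973
            wait "$pid"                            # @978: reap and surface exit code
        fi
    }

Invoked here as killprocess 79223; wait works because the reactor was started as a child of the test shell.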
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79223 ']'
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79223
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79223
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 79223
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79223'
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79223
00:28:43.448 13:48:35 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79223
00:28:48.715 13:48:40 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:28:54.052 262144+0 records in
00:28:54.052 262144+0 records out
00:28:54.052 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.03491 s, 213 MB/s
00:28:54.052 13:48:45 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:28:55.426 13:48:47 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:55.684 [2024-11-20 13:48:47.536885] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization...
00:28:55.684 [2024-11-20 13:48:47.537065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79471 ]
00:28:55.942 [2024-11-20 13:48:47.731579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.942 [2024-11-20 13:48:47.835444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:56.200 [2024-11-20 13:48:48.169838] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:56.200 [2024-11-20 13:48:48.169948] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:56.460 [2024-11-20 13:48:48.340480] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.012 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.340675] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 0.047 ms, status 0
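Condensed, the data path restore.sh drives above is: generate 1 GiB of random data, record its checksum, then push it through the FTL bdev with spdk_dd. A sketch using the exact commands from the trace (the read-back and checksum comparison this sets up happens later in the test):

    # Write side of the ftl_restore data path (restore.sh@69-@73 above).
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # 256K blocks of 4 KiB = 262144 * 4096 B = 1073741824 B = 1 GiB,
    # matching the "262144+0 records" dd reports above.
    dd if=/dev/urandom of="$testfile" bs=4K count=256K
    md5sum "$testfile"          # baseline checksum for the later comparison

    # --ob names an SPDK bdev (ftl0), not a file; the JSON config defines it.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if="$testfile" --ob=ftl0 --json="$ftl_json"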
00:28:56.460 [2024-11-20 13:48:48.340757] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:56.460 [2024-11-20 13:48:48.341701] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:56.460 [2024-11-20 13:48:48.341743] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.992 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.342952] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:56.460 [2024-11-20 13:48:48.359678] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 16.727 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.359883] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.047 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.364433] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 4.390 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.364684] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.100 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.364797] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.011 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.364896] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:56.460 [2024-11-20 13:48:48.369211] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 4.349 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.369330] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.012 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.369421] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:56.460 [2024-11-20 13:48:48.369460] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:56.460 [2024-11-20 13:48:48.369505] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:56.460 [2024-11-20 13:48:48.369536] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:56.460 [2024-11-20 13:48:48.369649] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:56.460 [2024-11-20 13:48:48.369665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:56.460 [2024-11-20 13:48:48.369679] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:56.460 [2024-11-20 13:48:48.369694] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:28:56.460 [2024-11-20 13:48:48.369706] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:28:56.460 [2024-11-20 13:48:48.369736] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:28:56.460 [2024-11-20 13:48:48.369747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:28:56.460 [2024-11-20 13:48:48.369758] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:28:56.460 [2024-11-20 13:48:48.369777] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:28:56.460 [2024-11-20 13:48:48.369789] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.372 ms, status 0
00:28:56.460 [2024-11-20 13:48:48.369942] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.087 ms, status 0
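The sizes reported above line up with the region dump that follows: 20,971,520 L2P entries at the 4-byte address size come to exactly 80 MiB, the size of the l2p region; the same region's 0x5000-block size gives 80 MiB again if one assumes the 4 KiB FTL block these figures imply. A quick bash check:

    echo $((20971520 * 4 / 1024 / 1024))    # 80 -> MiB needed for the L2P table
    echo $((0x5000 * 4096 / 1024 / 1024))   # 80 -> l2p blk_sz, assuming 4 KiB blocks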
00:28:56.460 [2024-11-20 13:48:48.370145] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
             Region            offset (MiB)   blocks (MiB)
             sb                     0.00           0.12
             l2p                    0.12          80.00
             band_md               80.12           0.50
             band_md_mirror        80.62           0.50
             nvc_md               113.88           0.12
             nvc_md_mirror        114.00           0.12
             p2l0                  81.12           8.00
             p2l1                  89.12           8.00
             p2l2                  97.12           8.00
             p2l3                 105.12           8.00
             trim_md              113.12           0.25
             trim_md_mirror       113.38           0.25
             trim_log             113.62           0.12
             trim_log_mirror      113.75           0.12
00:28:56.461 [2024-11-20 13:48:48.370618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
             sb_mirror              0.00           0.12
             vmap              102400.25           3.38
             data_btm               0.25      102400.00
00:28:56.461 [2024-11-20 13:48:48.370766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
             type         ver   blk_offs     blk_sz
             0x0          5     0x0          0x20
             0x2          0     0x20         0x5000
             0x3          2     0x5020       0x80
             0x4          2     0x50a0       0x80
             0xa          2     0x5120       0x800
             0xb          2     0x5920       0x800
             0xc          2     0x6120       0x800
             0xd          2     0x6920       0x800
             0xe          0     0x7120       0x40
             0xf          0     0x7160       0x40
             0x10         1     0x71a0       0x20
             0x11         1     0x71c0       0x20
             0x6          2     0x71e0       0x20
             0x7          2     0x7200       0x20
             0xfffffffe   0     0x7220       0x13c0e0
00:28:56.461 [2024-11-20 13:48:48.370964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
             0x1          5     0x0          0x20
             0xfffffffe   0     0x20         0x20
             0x9          0     0x40         0x1900000
             0x5          0     0x1900040    0x360
             0xfffffffe   0     0x19003a0    0x3fc60
00:28:56.461 [2024-11-20 13:48:48.371047] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.974 ms, status 0
00:28:56.461 [2024-11-20 13:48:48.406395] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 35.231 ms, status 0
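The base-device rows are consistent too: the type 0x9 data region spans 0x1900000 blocks, which at the same assumed 4 KiB block size is exactly the 102,400 MiB data_btm region, leaving 1,024 MiB of the 103,424 MiB base device for the superblock, vmap, and reserved areas:

    echo $((0x1900000 * 4096 / 1024 / 1024))   # 102400 -> data_btm size in MiB
    echo $((103424 - 102400))                  # 1024   -> MiB left for metadata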
00:28:56.461 [2024-11-20 13:48:48.406647] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.067 ms, status 0
00:28:56.461 [2024-11-20 13:48:48.467417] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 60.613 ms, status 0
00:28:56.461 [2024-11-20 13:48:48.467598] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.005 ms, status 0
00:28:56.461 [2024-11-20 13:48:48.468095] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.341 ms, status 0
00:28:56.461 [2024-11-20 13:48:48.468315] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.129 ms, status 0
00:28:56.461 [2024-11-20 13:48:48.486558] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 18.137 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.507411] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:28:56.720 [2024-11-20 13:48:48.507502] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:56.720 [2024-11-20 13:48:48.507527] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 20.666 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.539160] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 31.433 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.555850] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 16.452 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.571848] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 15.795 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.573048] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.800 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.651121] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 77.945 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.664349] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:56.720 [2024-11-20 13:48:48.667172] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 15.831 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.667413] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.018 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.667615] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.038 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.667694] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.006 ms, status 0
00:28:56.720 [2024-11-20 13:48:48.667790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:56.720 [2024-11-20 13:48:48.667810] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.022 ms, status 0
00:28:56.721 [2024-11-20 13:48:48.699713] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 31.805 ms, status 0
00:28:56.721 [2024-11-20 13:48:48.699979] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.054 ms, status 0
00:28:56.721 [2024-11-20 13:48:48.701388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.232 ms, result 0
[2024-11-20T13:48:50.716Z] Copying: 28/1024 [MB] (28 MBps)
...
[2024-11-20T13:49:26.039Z] Copying: 1024/1024 [MB] (average 27 MBps)
00:29:34.000 [2024-11-20 13:49:25.696214] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.003 ms, status 0
00:29:34.000 [2024-11-20 13:49:25.696342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:34.000 [2024-11-20 13:49:25.699689] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 3.324 ms, status 0
00:29:34.000 [2024-11-20 13:49:25.701341] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 1.553 ms, status 0
00:29:34.000 [2024-11-20 13:49:25.717672] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 16.236 ms, status 0
00:29:34.000 [2024-11-20 13:49:25.724496] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 6.686 ms, status 0
00:29:34.000 [2024-11-20 13:49:25.756037] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 31.397 ms, status 0
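As a sanity check on the copy phase above: 1024 MB at the reported average of 27 MBps works out to 1024 / 27, roughly 38 seconds, and the first and last progress stamps (13:48:50 to 13:49:26) span about 35 seconds, which is roughly consistent once the faster early chunks are averaged in.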
00:29:34.000 [2024-11-20 13:49:25.774056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:34.000 [2024-11-20 13:49:25.774075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.814 ms 00:29:34.000 [2024-11-20 13:49:25.774086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.000 [2024-11-20 13:49:25.774262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.000 [2024-11-20 13:49:25.774282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:34.000 [2024-11-20 13:49:25.774303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:29:34.000 [2024-11-20 13:49:25.774314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.000 [2024-11-20 13:49:25.806775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.000 [2024-11-20 13:49:25.806838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:34.000 [2024-11-20 13:49:25.806856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.438 ms 00:29:34.000 [2024-11-20 13:49:25.806880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.000 [2024-11-20 13:49:25.838397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.000 [2024-11-20 13:49:25.838448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:34.000 [2024-11-20 13:49:25.838483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.454 ms 00:29:34.000 [2024-11-20 13:49:25.838494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.000 [2024-11-20 13:49:25.869905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.000 [2024-11-20 13:49:25.869970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:34.000 [2024-11-20 13:49:25.869989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.352 ms 00:29:34.000 [2024-11-20 13:49:25.870001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.000 [2024-11-20 13:49:25.901259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.000 [2024-11-20 13:49:25.901325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:34.000 [2024-11-20 13:49:25.901346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.153 ms 00:29:34.000 [2024-11-20 13:49:25.901357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.000 [2024-11-20 13:49:25.901408] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:34.000 [2024-11-20 13:49:25.901431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:34.000 [2024-11-20 13:49:25.901446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:34.000 [2024-11-20 13:49:25.901459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:34.000 [2024-11-20 13:49:25.901471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:34.000 [2024-11-20 13:49:25.901483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:34.000 [2024-11-20 13:49:25.901495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 
/ 261120 wr_cnt: 0 state: free
00:29:34.000 [2024-11-20 13:49:25.901506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
[... Bands 8-99: identical, 0 / 261120 wr_cnt: 0 state: free ...]
00:29:34.001 [2024-11-20 13:49:25.902633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:34.001 [2024-11-20 13:49:25.902653] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:34.001 [2024-11-20 13:49:25.902673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75407c1e-ca5b-4724-90c3-ab5917c4cf24
00:29:34.001 [2024-11-20 13:49:25.902690] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:34.001 [2024-11-20 13:49:25.902711] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:29:34.001 [2024-11-20 13:49:25.902722] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:34.001 [2024-11-20 13:49:25.902733] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
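Two counters in the shutdown dump above are worth pairing: total writes: 960 against user writes: 0. With no user writes in this window, the write-amplification factor (conventionally media writes divided by user writes) degenerates to infinity, which is exactly the WAF: inf the dump prints. A minimal sketch of both figures, assuming the conventional WAF definition and the 4 KiB FTL block size implied by the copy totals later in this log; the helper below is illustrative, not an SPDK API:

# Illustrative only: relates the numbers in the dump above, not SPDK code.
BLOCK_SIZE = 4096      # 4 KiB FTL block, inferred from 262144 blocks <-> 1024 MB
BAND_BLOCKS = 261120   # the "0 / 261120" denominator in every band line

print(f"band capacity: {BAND_BLOCKS * BLOCK_SIZE / 2**20:.0f} MiB")  # -> 1020 MiB

def waf(total_writes: float, user_writes: float) -> float:
    # WAF = media writes / user writes; zero user writes yields "inf",
    # matching the "WAF: inf" line printed by ftl_dev_dump_stats.
    return float("inf") if user_writes == 0 else total_writes / user_writes

print(waf(960, 0))     # -> inf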
00:29:34.001 [2024-11-20 13:49:25.902744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:34.001 [2024-11-20 13:49:25.902755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:34.001 [2024-11-20 13:49:25.902766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:34.001 [2024-11-20 13:49:25.902790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:34.001 [2024-11-20 13:49:25.902800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:29:34.001 [2024-11-20 13:49:25.902811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:34.001 [2024-11-20 13:49:25.902823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:29:34.001 [2024-11-20 13:49:25.902835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms
00:29:34.001 [2024-11-20 13:49:25.902845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:34.001 [2024-11-20 13:49:25.919485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:34.001 [2024-11-20 13:49:25.919682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:29:34.001 [2024-11-20 13:49:25.919713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.578 ms
00:29:34.001 [2024-11-20 13:49:25.919726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:34.001 [2024-11-20 13:49:25.920310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:34.001 [2024-11-20 13:49:25.920350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:29:34.002 [2024-11-20 13:49:25.920366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms
00:29:34.002 [2024-11-20 13:49:25.920378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:34.002 [2024-11-20 13:49:25.965082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:34.002 [2024-11-20 13:49:25.965167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:34.002 [2024-11-20 13:49:25.965186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:34.002 [2024-11-20 13:49:25.965197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[... 10 further Rollback steps, each duration: 0.000 ms, status: 0: Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev ...]
00:29:34.261 [2024-11-20 13:49:26.156096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:34.261 [2024-11-20 13:49:26.156112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:34.261 [2024-11-20 13:49:26.156124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:34.261 [2024-11-20 13:49:26.156134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:34.261 [2024-11-20 13:49:26.156277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.028 ms, result 0
00:29:35.637
00:29:35.637
00:29:35.637 13:49:27 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:29:35.896 [2024-11-20 13:49:27.713600] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization...
00:29:35.896 [2024-11-20 13:49:27.713981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79863 ]
00:29:35.896 [2024-11-20 13:49:27.886053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:36.154 [2024-11-20 13:49:27.988624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:36.413 [2024-11-20 13:49:28.305333] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:36.413 [2024-11-20 13:49:28.305640] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:36.713 [2024-11-20 13:49:28.466033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:36.713 [2024-11-20 13:49:28.466108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:36.713 [2024-11-20 13:49:28.466134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:29:36.713 [2024-11-20 13:49:28.466147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:36.713 [2024-11-20 13:49:28.466222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:36.713 [2024-11-20 13:49:28.466241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:36.713 [2024-11-20 13:49:28.466258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:29:36.713 [2024-11-20 13:49:28.466269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:36.713 [2024-11-20 13:49:28.466299] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:36.713 [2024-11-20 13:49:28.467270] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:36.713 [2024-11-20 13:49:28.467316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:36.713 [2024-11-20 13:49:28.467332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:36.713 [2024-11-20 13:49:28.467345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms
00:29:36.713 [2024-11-20 13:49:28.467356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:36.713 [2024-11-20 13:49:28.468505] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:36.713 [2024-11-20 13:49:28.486248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:36.713 [2024-11-20 13:49:28.486371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Load super block 00:29:36.713 [2024-11-20 13:49:28.486404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.731 ms 00:29:36.713 [2024-11-20 13:49:28.486427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.486584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.486604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:36.713 [2024-11-20 13:49:28.486619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:36.713 [2024-11-20 13:49:28.486630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.491733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.492072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:36.713 [2024-11-20 13:49:28.492116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:29:36.713 [2024-11-20 13:49:28.492146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.492269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.492288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:36.713 [2024-11-20 13:49:28.492302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:36.713 [2024-11-20 13:49:28.492314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.492405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.492422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:36.713 [2024-11-20 13:49:28.492434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:36.713 [2024-11-20 13:49:28.492445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.492484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:36.713 [2024-11-20 13:49:28.496840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.496902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:36.713 [2024-11-20 13:49:28.496918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:29:36.713 [2024-11-20 13:49:28.496936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.496985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.496999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:36.713 [2024-11-20 13:49:28.497011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:36.713 [2024-11-20 13:49:28.497023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.497119] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:36.713 [2024-11-20 13:49:28.497151] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:36.713 [2024-11-20 13:49:28.497197] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:36.713 [2024-11-20 
13:49:28.497223] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:36.713 [2024-11-20 13:49:28.497339] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:36.713 [2024-11-20 13:49:28.497362] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:36.713 [2024-11-20 13:49:28.497378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:36.713 [2024-11-20 13:49:28.497392] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:36.713 [2024-11-20 13:49:28.497406] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:36.713 [2024-11-20 13:49:28.497418] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:36.713 [2024-11-20 13:49:28.497429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:36.713 [2024-11-20 13:49:28.497439] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:36.713 [2024-11-20 13:49:28.497456] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:36.713 [2024-11-20 13:49:28.497468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.497479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:36.713 [2024-11-20 13:49:28.497491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:29:36.713 [2024-11-20 13:49:28.497502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.497603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.713 [2024-11-20 13:49:28.497618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:36.713 [2024-11-20 13:49:28.497629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:36.713 [2024-11-20 13:49:28.497640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.713 [2024-11-20 13:49:28.497765] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:36.713 [2024-11-20 13:49:28.497785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:36.713 [2024-11-20 13:49:28.497797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:36.713 [2024-11-20 13:49:28.497808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.713 [2024-11-20 13:49:28.497820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:36.713 [2024-11-20 13:49:28.497830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:36.713 [2024-11-20 13:49:28.497841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:36.713 [2024-11-20 13:49:28.497851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:36.714 [2024-11-20 13:49:28.497861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:36.714 [2024-11-20 13:49:28.497898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:36.714 [2024-11-20 13:49:28.497911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:36.714 [2024-11-20 13:49:28.497922] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:36.714 [2024-11-20 13:49:28.497931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:36.714 [2024-11-20 13:49:28.497942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:36.714 [2024-11-20 13:49:28.497952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:36.714 [2024-11-20 13:49:28.497977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.714 [2024-11-20 13:49:28.497988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:36.714 [2024-11-20 13:49:28.497998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:36.714 [2024-11-20 13:49:28.498008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:36.714 [2024-11-20 13:49:28.498031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.714 [2024-11-20 13:49:28.498051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:36.714 [2024-11-20 13:49:28.498062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.714 [2024-11-20 13:49:28.498082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:36.714 [2024-11-20 13:49:28.498092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.714 [2024-11-20 13:49:28.498112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:36.714 [2024-11-20 13:49:28.498122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.714 [2024-11-20 13:49:28.498143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:36.714 [2024-11-20 13:49:28.498152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:36.714 [2024-11-20 13:49:28.498172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:36.714 [2024-11-20 13:49:28.498182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:36.714 [2024-11-20 13:49:28.498192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:36.714 [2024-11-20 13:49:28.498203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:36.714 [2024-11-20 13:49:28.498213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:36.714 [2024-11-20 13:49:28.498223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:36.714 [2024-11-20 13:49:28.498243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:36.714 [2024-11-20 13:49:28.498253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.714 [2024-11-20 
13:49:28.498263] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:36.714 [2024-11-20 13:49:28.498275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:36.714 [2024-11-20 13:49:28.498285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:36.714 [2024-11-20 13:49:28.498296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.714 [2024-11-20 13:49:28.498307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:36.714 [2024-11-20 13:49:28.498319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:36.714 [2024-11-20 13:49:28.498329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:36.714 [2024-11-20 13:49:28.498339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:36.714 [2024-11-20 13:49:28.498349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:36.714 [2024-11-20 13:49:28.498360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:36.714 [2024-11-20 13:49:28.498372] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:36.714 [2024-11-20 13:49:28.498386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.714 [2024-11-20 13:49:28.498400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:36.714 [2024-11-20 13:49:28.498411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:36.714 [2024-11-20 13:49:28.498423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:36.714 [2024-11-20 13:49:28.498434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:36.714 [2024-11-20 13:49:28.498444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:36.714 [2024-11-20 13:49:28.498455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:36.714 [2024-11-20 13:49:28.498466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:36.714 [2024-11-20 13:49:28.498477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:36.714 [2024-11-20 13:49:28.498488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:36.714 [2024-11-20 13:49:28.498499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:36.714 [2024-11-20 13:49:28.498510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:36.714 [2024-11-20 13:49:28.498521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:36.714 [2024-11-20 13:49:28.498532] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:36.714 [2024-11-20 13:49:28.498543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:36.714 [2024-11-20 13:49:28.498554] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:36.714 [2024-11-20 13:49:28.498573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.714 [2024-11-20 13:49:28.498585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:36.714 [2024-11-20 13:49:28.498596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:36.714 [2024-11-20 13:49:28.498608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:36.714 [2024-11-20 13:49:28.498619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:36.714 [2024-11-20 13:49:28.498631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.714 [2024-11-20 13:49:28.498643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:36.714 [2024-11-20 13:49:28.498655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:29:36.714 [2024-11-20 13:49:28.498665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.714 [2024-11-20 13:49:28.531600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.714 [2024-11-20 13:49:28.531668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:36.714 [2024-11-20 13:49:28.531689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.856 ms 00:29:36.714 [2024-11-20 13:49:28.531702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.714 [2024-11-20 13:49:28.531825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.714 [2024-11-20 13:49:28.531840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:36.714 [2024-11-20 13:49:28.531853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:36.714 [2024-11-20 13:49:28.531864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.714 [2024-11-20 13:49:28.588022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.714 [2024-11-20 13:49:28.588091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:36.714 [2024-11-20 13:49:28.588113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.024 ms 00:29:36.714 [2024-11-20 13:49:28.588124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.714 [2024-11-20 13:49:28.588204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.714 [2024-11-20 13:49:28.588221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:36.715 [2024-11-20 13:49:28.588242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:36.715 [2024-11-20 13:49:28.588254] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.715 [2024-11-20 13:49:28.588646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.588666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:36.715 [2024-11-20 13:49:28.588679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:29:36.715 [2024-11-20 13:49:28.588690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.715 [2024-11-20 13:49:28.588848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.588894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:36.715 [2024-11-20 13:49:28.588911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:29:36.715 [2024-11-20 13:49:28.588930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.715 [2024-11-20 13:49:28.605698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.605765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:36.715 [2024-11-20 13:49:28.605792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.737 ms 00:29:36.715 [2024-11-20 13:49:28.605804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.715 [2024-11-20 13:49:28.623484] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:36.715 [2024-11-20 13:49:28.623679] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:36.715 [2024-11-20 13:49:28.623707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.623720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:36.715 [2024-11-20 13:49:28.623734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.648 ms 00:29:36.715 [2024-11-20 13:49:28.623745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.715 [2024-11-20 13:49:28.653694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.653744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:36.715 [2024-11-20 13:49:28.653763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.880 ms 00:29:36.715 [2024-11-20 13:49:28.653775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.715 [2024-11-20 13:49:28.669501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.669684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:36.715 [2024-11-20 13:49:28.669713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:29:36.715 [2024-11-20 13:49:28.669726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.715 [2024-11-20 13:49:28.685474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.685637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:36.715 [2024-11-20 13:49:28.685752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.669 ms 00:29:36.715 [2024-11-20 13:49:28.685803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
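Two of the sizes in the layout dump above can be cross-checked by hand: the l2p region is reported as 80.00 MiB, and each of the four p2l regions as 8.00 MiB. Both follow from the logged parameters (L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048), assuming a 4 KiB block, which is an inference rather than a logged value. A quick arithmetic sketch:

# Arithmetic cross-check of the layout dump (no SPDK calls involved).
L2P_ENTRIES = 20971520            # "L2P entries" in the dump
L2P_ADDR_SIZE = 4                 # "L2P address size: 4" (bytes per entry)
print(L2P_ENTRIES * L2P_ADDR_SIZE / 2**20)   # -> 80.0, the 80.00 MiB l2p region

P2L_PAGES = 2048                  # "P2L checkpoint pages: 2048"
BLOCK_SIZE = 4096                 # assumed 4 KiB FTL block
print(P2L_PAGES * BLOCK_SIZE / 2**20)        # -> 8.0, each 8.00 MiB p2l region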
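Each management step above also reports its own duration (the NV cache metadata restore at 17.648 ms, the valid map restore at 29.880 ms, and so on), so the slow steps of a startup can be ranked straight from the console output. A small post-processing sketch, assuming the log has been split one entry per line; the file name is a placeholder and the script is not part of the test suite:

import re

# Pair each "name: <step>" entry with the "duration: <ms>" entry that follows it.
durations = []
name = None
for line in open("console.log"):              # placeholder path
    m = re.search(r"\[FTL\]\[ftl0\] name: (.+)", line)
    if m:
        name = m.group(1).strip()
        continue
    m = re.search(r"\[FTL\]\[ftl0\] duration: ([0-9.]+) ms", line)
    if m and name is not None:
        durations.append((float(m.group(1)), name))
        name = None

# Ten slowest steps, slowest first.
for ms, step in sorted(durations, reverse=True)[:10]:
    print(f"{ms:10.3f} ms  {step}")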
00:29:36.715 [2024-11-20 13:49:28.686661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.715 [2024-11-20 13:49:28.686828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:36.715 [2024-11-20 13:49:28.686970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:29:36.715 [2024-11-20 13:49:28.687105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.760160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 [2024-11-20 13:49:28.760477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:36.974 [2024-11-20 13:49:28.760616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.978 ms 00:29:36.974 [2024-11-20 13:49:28.760668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.773943] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:36.974 [2024-11-20 13:49:28.777013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 [2024-11-20 13:49:28.777229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:36.974 [2024-11-20 13:49:28.777392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.164 ms 00:29:36.974 [2024-11-20 13:49:28.777448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.777689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 [2024-11-20 13:49:28.777845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:36.974 [2024-11-20 13:49:28.777997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:36.974 [2024-11-20 13:49:28.778051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.778254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 [2024-11-20 13:49:28.778382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:36.974 [2024-11-20 13:49:28.778497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:36.974 [2024-11-20 13:49:28.778610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.778663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 [2024-11-20 13:49:28.778679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:36.974 [2024-11-20 13:49:28.778692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:36.974 [2024-11-20 13:49:28.778727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.778771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:36.974 [2024-11-20 13:49:28.778789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 [2024-11-20 13:49:28.778800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:36.974 [2024-11-20 13:49:28.778812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:36.974 [2024-11-20 13:49:28.778823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.810022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 
[2024-11-20 13:49:28.810077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:36.974 [2024-11-20 13:49:28.810102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.170 ms 00:29:36.974 [2024-11-20 13:49:28.810115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.810206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.974 [2024-11-20 13:49:28.810225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:36.974 [2024-11-20 13:49:28.810238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:36.974 [2024-11-20 13:49:28.810249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.974 [2024-11-20 13:49:28.811447] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.935 ms, result 0 00:29:38.348  [2024-11-20T13:49:31.323Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-20T13:49:32.259Z] Copying: 53/1024 [MB] (26 MBps) [2024-11-20T13:49:33.195Z] Copying: 79/1024 [MB] (26 MBps) [2024-11-20T13:49:34.168Z] Copying: 105/1024 [MB] (25 MBps) [2024-11-20T13:49:35.103Z] Copying: 129/1024 [MB] (23 MBps) [2024-11-20T13:49:36.037Z] Copying: 155/1024 [MB] (25 MBps) [2024-11-20T13:49:37.410Z] Copying: 180/1024 [MB] (24 MBps) [2024-11-20T13:49:38.344Z] Copying: 207/1024 [MB] (27 MBps) [2024-11-20T13:49:39.286Z] Copying: 233/1024 [MB] (26 MBps) [2024-11-20T13:49:40.221Z] Copying: 262/1024 [MB] (28 MBps) [2024-11-20T13:49:41.156Z] Copying: 289/1024 [MB] (26 MBps) [2024-11-20T13:49:42.090Z] Copying: 315/1024 [MB] (26 MBps) [2024-11-20T13:49:43.465Z] Copying: 342/1024 [MB] (26 MBps) [2024-11-20T13:49:44.082Z] Copying: 367/1024 [MB] (25 MBps) [2024-11-20T13:49:45.457Z] Copying: 392/1024 [MB] (24 MBps) [2024-11-20T13:49:46.393Z] Copying: 417/1024 [MB] (25 MBps) [2024-11-20T13:49:47.330Z] Copying: 442/1024 [MB] (25 MBps) [2024-11-20T13:49:48.265Z] Copying: 468/1024 [MB] (25 MBps) [2024-11-20T13:49:49.199Z] Copying: 493/1024 [MB] (25 MBps) [2024-11-20T13:49:50.133Z] Copying: 519/1024 [MB] (25 MBps) [2024-11-20T13:49:51.069Z] Copying: 543/1024 [MB] (24 MBps) [2024-11-20T13:49:52.445Z] Copying: 567/1024 [MB] (23 MBps) [2024-11-20T13:49:53.379Z] Copying: 591/1024 [MB] (24 MBps) [2024-11-20T13:49:54.316Z] Copying: 614/1024 [MB] (23 MBps) [2024-11-20T13:49:55.264Z] Copying: 638/1024 [MB] (23 MBps) [2024-11-20T13:49:56.200Z] Copying: 663/1024 [MB] (25 MBps) [2024-11-20T13:49:57.135Z] Copying: 691/1024 [MB] (27 MBps) [2024-11-20T13:49:58.069Z] Copying: 716/1024 [MB] (25 MBps) [2024-11-20T13:49:59.442Z] Copying: 739/1024 [MB] (23 MBps) [2024-11-20T13:50:00.393Z] Copying: 764/1024 [MB] (24 MBps) [2024-11-20T13:50:01.328Z] Copying: 790/1024 [MB] (26 MBps) [2024-11-20T13:50:02.264Z] Copying: 816/1024 [MB] (25 MBps) [2024-11-20T13:50:03.197Z] Copying: 842/1024 [MB] (26 MBps) [2024-11-20T13:50:04.131Z] Copying: 868/1024 [MB] (25 MBps) [2024-11-20T13:50:05.064Z] Copying: 894/1024 [MB] (26 MBps) [2024-11-20T13:50:06.441Z] Copying: 919/1024 [MB] (25 MBps) [2024-11-20T13:50:07.376Z] Copying: 945/1024 [MB] (25 MBps) [2024-11-20T13:50:08.307Z] Copying: 971/1024 [MB] (26 MBps) [2024-11-20T13:50:09.245Z] Copying: 997/1024 [MB] (26 MBps) [2024-11-20T13:50:09.245Z] Copying: 1023/1024 [MB] (25 MBps) [2024-11-20T13:50:09.245Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 13:50:09.154694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.206 
[2024-11-20 13:50:09.154694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.154998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:30:17.206 [2024-11-20 13:50:09.155143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:30:17.206 [2024-11-20 13:50:09.155171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.206 [2024-11-20 13:50:09.155215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:17.206 [2024-11-20 13:50:09.159071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.159123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:30:17.206 [2024-11-20 13:50:09.159140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.831 ms
00:30:17.206 [2024-11-20 13:50:09.159152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.206 [2024-11-20 13:50:09.159422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.159449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:30:17.206 [2024-11-20 13:50:09.159462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms
00:30:17.206 [2024-11-20 13:50:09.159473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.206 [2024-11-20 13:50:09.163020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.163052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:30:17.206 [2024-11-20 13:50:09.163065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms
00:30:17.206 [2024-11-20 13:50:09.163090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.206 [2024-11-20 13:50:09.169886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.169919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:30:17.206 [2024-11-20 13:50:09.169933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.773 ms
00:30:17.206 [2024-11-20 13:50:09.169944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.206 [2024-11-20 13:50:09.201535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.201583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:30:17.206 [2024-11-20 13:50:09.201601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.517 ms
00:30:17.206 [2024-11-20 13:50:09.201613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.206 [2024-11-20 13:50:09.219296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.219359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:30:17.206 [2024-11-20 13:50:09.219377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.635 ms
00:30:17.206 [2024-11-20 13:50:09.219389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.206 [2024-11-20 13:50:09.219540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.206 [2024-11-20 13:50:09.219559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:30:17.206 [2024-11-20 13:50:09.219572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms
00:30:17.206 [2024-11-20 13:50:09.219583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.465 [2024-11-20 13:50:09.251670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.465 [2024-11-20 13:50:09.251728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:30:17.465 [2024-11-20 13:50:09.251746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.065 ms
00:30:17.465 [2024-11-20 13:50:09.251758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.465 [2024-11-20 13:50:09.283480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.465 [2024-11-20 13:50:09.283571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:30:17.465 [2024-11-20 13:50:09.283592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.666 ms
00:30:17.466 [2024-11-20 13:50:09.283604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.466 [2024-11-20 13:50:09.315085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.466 [2024-11-20 13:50:09.315149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:30:17.466 [2024-11-20 13:50:09.315167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.416 ms
00:30:17.466 [2024-11-20 13:50:09.315179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.466 [2024-11-20 13:50:09.347400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.466 [2024-11-20 13:50:09.347469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:30:17.466 [2024-11-20 13:50:09.347489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.112 ms
00:30:17.466 [2024-11-20 13:50:09.347501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:17.466 [2024-11-20 13:50:09.347550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:17.466 [2024-11-20 13:50:09.347585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-99: identical, 0 / 261120 wr_cnt: 0 state: free ...]
00:30:17.467 [2024-11-20 13:50:09.348742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:30:17.467 [2024-11-20 13:50:09.348762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:17.467 [2024-11-20 13:50:09.348773] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75407c1e-ca5b-4724-90c3-ab5917c4cf24
00:30:17.467 [2024-11-20 13:50:09.348785] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:30:17.467 [2024-11-20 13:50:09.348796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:30:17.467 [2024-11-20 13:50:09.348806] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:30:17.467 [2024-11-20 13:50:09.348817] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:30:17.467 [2024-11-20 13:50:09.348829] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:17.467 [2024-11-20 13:50:09.348841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:30:17.467 [2024-11-20 13:50:09.348878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:30:17.467 [2024-11-20 13:50:09.348890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:30:17.467 [2024-11-20 13:50:09.348901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:30:17.467 [2024-11-20 13:50:09.348912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:17.467 [2024-11-20 13:50:09.348924] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:17.467 [2024-11-20 13:50:09.348936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:30:17.467 [2024-11-20 13:50:09.348951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.467 [2024-11-20 13:50:09.365571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.467 [2024-11-20 13:50:09.365612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:17.467 [2024-11-20 13:50:09.365629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.572 ms 00:30:17.467 [2024-11-20 13:50:09.365640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.467 [2024-11-20 13:50:09.366094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.467 [2024-11-20 13:50:09.366124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:17.467 [2024-11-20 13:50:09.366146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:30:17.467 [2024-11-20 13:50:09.366158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.467 [2024-11-20 13:50:09.409631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.467 [2024-11-20 13:50:09.409718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:17.467 [2024-11-20 13:50:09.409738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.467 [2024-11-20 13:50:09.409750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.467 [2024-11-20 13:50:09.409833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.467 [2024-11-20 13:50:09.409849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:17.467 [2024-11-20 13:50:09.409881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.467 [2024-11-20 13:50:09.409894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.467 [2024-11-20 13:50:09.409999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.467 [2024-11-20 13:50:09.410018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:17.467 [2024-11-20 13:50:09.410031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.467 [2024-11-20 13:50:09.410041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.467 [2024-11-20 13:50:09.410063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.467 [2024-11-20 13:50:09.410076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:17.467 [2024-11-20 13:50:09.410088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.467 [2024-11-20 13:50:09.410105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.726 [2024-11-20 13:50:09.514990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.726 [2024-11-20 13:50:09.515062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:17.726 [2024-11-20 13:50:09.515082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.726 [2024-11-20 13:50:09.515094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.726 [2024-11-20 13:50:09.600379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:30:17.726 [2024-11-20 13:50:09.600458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:17.726 [2024-11-20 13:50:09.600490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.727 [2024-11-20 13:50:09.600503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.727 [2024-11-20 13:50:09.600618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.727 [2024-11-20 13:50:09.600637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:17.727 [2024-11-20 13:50:09.600649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.727 [2024-11-20 13:50:09.600660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.727 [2024-11-20 13:50:09.600707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.727 [2024-11-20 13:50:09.600721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:17.727 [2024-11-20 13:50:09.600733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.727 [2024-11-20 13:50:09.600744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.727 [2024-11-20 13:50:09.600916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.727 [2024-11-20 13:50:09.600945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:17.727 [2024-11-20 13:50:09.600958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.727 [2024-11-20 13:50:09.600970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.727 [2024-11-20 13:50:09.601023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.727 [2024-11-20 13:50:09.601042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:17.727 [2024-11-20 13:50:09.601054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.727 [2024-11-20 13:50:09.601065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.727 [2024-11-20 13:50:09.601115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.727 [2024-11-20 13:50:09.601133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:17.727 [2024-11-20 13:50:09.601144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.727 [2024-11-20 13:50:09.601155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.727 [2024-11-20 13:50:09.601206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.727 [2024-11-20 13:50:09.601222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:17.727 [2024-11-20 13:50:09.601234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.727 [2024-11-20 13:50:09.601245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.727 [2024-11-20 13:50:09.601389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.690 ms, result 0 00:30:18.707 00:30:18.707 00:30:18.707 13:50:10 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:21.241 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:21.241 13:50:12 ftl.ftl_restore -- ftl/restore.sh@79 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:21.241 [2024-11-20 13:50:12.867679] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:30:21.241 [2024-11-20 13:50:12.867832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80312 ] 00:30:21.241 [2024-11-20 13:50:13.041210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.241 [2024-11-20 13:50:13.145095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.500 [2024-11-20 13:50:13.464456] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:21.500 [2024-11-20 13:50:13.464545] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:21.759 [2024-11-20 13:50:13.625838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.759 [2024-11-20 13:50:13.625900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:21.759 [2024-11-20 13:50:13.625926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:21.759 [2024-11-20 13:50:13.625938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.759 [2024-11-20 13:50:13.626011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.759 [2024-11-20 13:50:13.626032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:21.759 [2024-11-20 13:50:13.626049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:21.759 [2024-11-20 13:50:13.626061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.626093] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:21.760 [2024-11-20 13:50:13.627057] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:21.760 [2024-11-20 13:50:13.627095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.627110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:21.760 [2024-11-20 13:50:13.627124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:30:21.760 [2024-11-20 13:50:13.627135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.628317] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:21.760 [2024-11-20 13:50:13.644773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.644833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:21.760 [2024-11-20 13:50:13.644853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.456 ms 00:30:21.760 [2024-11-20 13:50:13.644884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.645171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.645198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:21.760 [2024-11-20 
13:50:13.645212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:21.760 [2024-11-20 13:50:13.645224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.649760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.649816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:21.760 [2024-11-20 13:50:13.649833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.423 ms 00:30:21.760 [2024-11-20 13:50:13.649853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.649984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.650007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:21.760 [2024-11-20 13:50:13.650020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:30:21.760 [2024-11-20 13:50:13.650031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.650109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.650128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:21.760 [2024-11-20 13:50:13.650152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:21.760 [2024-11-20 13:50:13.650163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.650205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:21.760 [2024-11-20 13:50:13.654543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.654582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:21.760 [2024-11-20 13:50:13.654598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.353 ms 00:30:21.760 [2024-11-20 13:50:13.654615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.654656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.654672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:21.760 [2024-11-20 13:50:13.654685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:21.760 [2024-11-20 13:50:13.654696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.654766] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:21.760 [2024-11-20 13:50:13.654798] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:21.760 [2024-11-20 13:50:13.654841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:21.760 [2024-11-20 13:50:13.654882] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:21.760 [2024-11-20 13:50:13.655002] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:21.760 [2024-11-20 13:50:13.655023] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:21.760 [2024-11-20 13:50:13.655038] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:21.760 [2024-11-20 13:50:13.655054] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655068] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:21.760 [2024-11-20 13:50:13.655091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:21.760 [2024-11-20 13:50:13.655102] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:21.760 [2024-11-20 13:50:13.655118] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:21.760 [2024-11-20 13:50:13.655131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.655143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:21.760 [2024-11-20 13:50:13.655154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:30:21.760 [2024-11-20 13:50:13.655165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.655268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.760 [2024-11-20 13:50:13.655289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:21.760 [2024-11-20 13:50:13.655302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:21.760 [2024-11-20 13:50:13.655312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.760 [2024-11-20 13:50:13.655473] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:21.760 [2024-11-20 13:50:13.655497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:21.760 [2024-11-20 13:50:13.655510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:21.760 [2024-11-20 13:50:13.655543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:21.760 [2024-11-20 13:50:13.655575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:21.760 [2024-11-20 13:50:13.655596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:21.760 [2024-11-20 13:50:13.655607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:21.760 [2024-11-20 13:50:13.655618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:21.760 [2024-11-20 13:50:13.655629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:21.760 [2024-11-20 13:50:13.655639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:21.760 [2024-11-20 13:50:13.655663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:30:21.760 [2024-11-20 13:50:13.655674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:21.760 [2024-11-20 13:50:13.655684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:21.760 [2024-11-20 13:50:13.655716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:21.760 [2024-11-20 13:50:13.655747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:21.760 [2024-11-20 13:50:13.655777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:21.760 [2024-11-20 13:50:13.655818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.760 [2024-11-20 13:50:13.655838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:21.760 [2024-11-20 13:50:13.655849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:21.760 [2024-11-20 13:50:13.655887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:21.760 [2024-11-20 13:50:13.655902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:21.760 [2024-11-20 13:50:13.655912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:21.760 [2024-11-20 13:50:13.655923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:21.760 [2024-11-20 13:50:13.655933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:21.760 [2024-11-20 13:50:13.655943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:21.760 [2024-11-20 13:50:13.655963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:21.760 [2024-11-20 13:50:13.655973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.760 [2024-11-20 13:50:13.655983] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:21.760 [2024-11-20 13:50:13.655995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:21.760 [2024-11-20 13:50:13.656006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:21.760 [2024-11-20 13:50:13.656017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.760 [2024-11-20 13:50:13.656028] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:21.761 [2024-11-20 13:50:13.656039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:21.761 [2024-11-20 13:50:13.656049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:21.761 [2024-11-20 13:50:13.656060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:21.761 [2024-11-20 13:50:13.656070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:21.761 [2024-11-20 13:50:13.656080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:21.761 [2024-11-20 13:50:13.656092] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:21.761 [2024-11-20 13:50:13.656107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:21.761 [2024-11-20 13:50:13.656120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:21.761 [2024-11-20 13:50:13.656131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:21.761 [2024-11-20 13:50:13.656142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:21.761 [2024-11-20 13:50:13.656153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:21.761 [2024-11-20 13:50:13.656165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:21.761 [2024-11-20 13:50:13.656176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:21.761 [2024-11-20 13:50:13.656187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:21.761 [2024-11-20 13:50:13.656198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:21.761 [2024-11-20 13:50:13.656210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:21.761 [2024-11-20 13:50:13.656221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:21.761 [2024-11-20 13:50:13.656233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:21.761 [2024-11-20 13:50:13.656244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:21.761 [2024-11-20 13:50:13.656255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:21.761 [2024-11-20 13:50:13.656266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:21.761 [2024-11-20 13:50:13.656277] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:21.761 
[2024-11-20 13:50:13.656295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:21.761 [2024-11-20 13:50:13.656316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:21.761 [2024-11-20 13:50:13.656328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:21.761 [2024-11-20 13:50:13.656340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:21.761 [2024-11-20 13:50:13.656351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:21.761 [2024-11-20 13:50:13.656363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.656375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:21.761 [2024-11-20 13:50:13.656387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:30:21.761 [2024-11-20 13:50:13.656398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.690168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.690232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:21.761 [2024-11-20 13:50:13.690252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.705 ms 00:30:21.761 [2024-11-20 13:50:13.690265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.690388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.690405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:21.761 [2024-11-20 13:50:13.690417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:21.761 [2024-11-20 13:50:13.690429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.740013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.740074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:21.761 [2024-11-20 13:50:13.740094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.487 ms 00:30:21.761 [2024-11-20 13:50:13.740106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.740187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.740206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:21.761 [2024-11-20 13:50:13.740226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:21.761 [2024-11-20 13:50:13.740238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.740653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.740673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:21.761 [2024-11-20 13:50:13.740687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:30:21.761 [2024-11-20 13:50:13.740699] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.740881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.740904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:21.761 [2024-11-20 13:50:13.740916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:30:21.761 [2024-11-20 13:50:13.740935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.757988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.758044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:21.761 [2024-11-20 13:50:13.758069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.018 ms 00:30:21.761 [2024-11-20 13:50:13.758082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.761 [2024-11-20 13:50:13.774616] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:21.761 [2024-11-20 13:50:13.774679] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:21.761 [2024-11-20 13:50:13.774701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.761 [2024-11-20 13:50:13.774723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:21.761 [2024-11-20 13:50:13.774738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.424 ms 00:30:21.761 [2024-11-20 13:50:13.774749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.804846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.804947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:22.020 [2024-11-20 13:50:13.804970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.025 ms 00:30:22.020 [2024-11-20 13:50:13.804982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.821079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.821134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:22.020 [2024-11-20 13:50:13.821164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.033 ms 00:30:22.020 [2024-11-20 13:50:13.821176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.836937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.836993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:22.020 [2024-11-20 13:50:13.837011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.702 ms 00:30:22.020 [2024-11-20 13:50:13.837023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.837839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.837891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:22.020 [2024-11-20 13:50:13.837909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:30:22.020 [2024-11-20 13:50:13.837926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 
13:50:13.912809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.912893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:22.020 [2024-11-20 13:50:13.912924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.854 ms 00:30:22.020 [2024-11-20 13:50:13.912937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.925819] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:22.020 [2024-11-20 13:50:13.928489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.928525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:22.020 [2024-11-20 13:50:13.928544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.472 ms 00:30:22.020 [2024-11-20 13:50:13.928556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.928686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.928709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:22.020 [2024-11-20 13:50:13.928723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:22.020 [2024-11-20 13:50:13.928738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.928833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.928853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:22.020 [2024-11-20 13:50:13.928882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:22.020 [2024-11-20 13:50:13.928898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.928932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.928948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:22.020 [2024-11-20 13:50:13.928961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:22.020 [2024-11-20 13:50:13.928972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.929020] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:22.020 [2024-11-20 13:50:13.929038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.929049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:22.020 [2024-11-20 13:50:13.929061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:22.020 [2024-11-20 13:50:13.929072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.961481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.961557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:22.020 [2024-11-20 13:50:13.961579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.379 ms 00:30:22.020 [2024-11-20 13:50:13.961600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.961723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.020 [2024-11-20 13:50:13.961743] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:22.020 [2024-11-20 13:50:13.961757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:22.020 [2024-11-20 13:50:13.961768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.020 [2024-11-20 13:50:13.963081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.693 ms, result 0 00:30:22.955  [2024-11-20T13:50:15.980Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-20T13:50:17.357Z] Copying: 58/1024 [MB] (30 MBps) [2024-11-20T13:50:18.291Z] Copying: 87/1024 [MB] (29 MBps) [2024-11-20T13:50:19.227Z] Copying: 116/1024 [MB] (28 MBps) [2024-11-20T13:50:20.160Z] Copying: 144/1024 [MB] (28 MBps) [2024-11-20T13:50:21.094Z] Copying: 172/1024 [MB] (27 MBps) [2024-11-20T13:50:22.029Z] Copying: 200/1024 [MB] (27 MBps) [2024-11-20T13:50:23.406Z] Copying: 230/1024 [MB] (29 MBps) [2024-11-20T13:50:24.341Z] Copying: 260/1024 [MB] (29 MBps) [2024-11-20T13:50:25.281Z] Copying: 288/1024 [MB] (28 MBps) [2024-11-20T13:50:26.216Z] Copying: 317/1024 [MB] (28 MBps) [2024-11-20T13:50:27.151Z] Copying: 344/1024 [MB] (27 MBps) [2024-11-20T13:50:28.087Z] Copying: 373/1024 [MB] (29 MBps) [2024-11-20T13:50:29.021Z] Copying: 402/1024 [MB] (28 MBps) [2024-11-20T13:50:29.997Z] Copying: 430/1024 [MB] (28 MBps) [2024-11-20T13:50:31.373Z] Copying: 460/1024 [MB] (29 MBps) [2024-11-20T13:50:32.308Z] Copying: 488/1024 [MB] (28 MBps) [2024-11-20T13:50:33.242Z] Copying: 518/1024 [MB] (29 MBps) [2024-11-20T13:50:34.176Z] Copying: 550/1024 [MB] (32 MBps) [2024-11-20T13:50:35.110Z] Copying: 580/1024 [MB] (29 MBps) [2024-11-20T13:50:36.046Z] Copying: 608/1024 [MB] (28 MBps) [2024-11-20T13:50:36.980Z] Copying: 634/1024 [MB] (26 MBps) [2024-11-20T13:50:38.355Z] Copying: 663/1024 [MB] (28 MBps) [2024-11-20T13:50:39.289Z] Copying: 692/1024 [MB] (28 MBps) [2024-11-20T13:50:40.309Z] Copying: 721/1024 [MB] (28 MBps) [2024-11-20T13:50:41.266Z] Copying: 749/1024 [MB] (28 MBps) [2024-11-20T13:50:42.200Z] Copying: 777/1024 [MB] (28 MBps) [2024-11-20T13:50:43.135Z] Copying: 806/1024 [MB] (28 MBps) [2024-11-20T13:50:44.070Z] Copying: 835/1024 [MB] (29 MBps) [2024-11-20T13:50:45.004Z] Copying: 864/1024 [MB] (29 MBps) [2024-11-20T13:50:46.380Z] Copying: 891/1024 [MB] (26 MBps) [2024-11-20T13:50:47.315Z] Copying: 919/1024 [MB] (27 MBps) [2024-11-20T13:50:48.248Z] Copying: 945/1024 [MB] (26 MBps) [2024-11-20T13:50:49.179Z] Copying: 974/1024 [MB] (29 MBps) [2024-11-20T13:50:50.110Z] Copying: 1005/1024 [MB] (31 MBps) [2024-11-20T13:50:51.047Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-20T13:50:51.047Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-20 13:50:50.740608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.740743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:59.008 [2024-11-20 13:50:50.740781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:59.008 [2024-11-20 13:50:50.740822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.743612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:59.008 [2024-11-20 13:50:50.751068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.751123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
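With the copy above complete (1024/1024 MB, average 27 MBps) and the FTL shutdown under way, the phase that just finished was driven by the two commands quoted earlier in the log (ftl/restore.sh@76 and ftl/restore.sh@79): an md5 check of the source file, then an spdk_dd write of that file into the ftl0 bdev. A minimal sketch of that sequence, using only the paths and flags visible in the logged invocation (reading --seek as an output offset in blocks, per dd convention, is an assumption here):

    # Verify the test file against its recorded checksum before replaying it.
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

    # Write the file into the ftl0 bdev at output block offset 131072, using
    # the JSON config that describes the FTL bdev stack.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --seek=131072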
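The statistics block printed further down in this shutdown is also worth a gloss: it reports WAF: 1.0076, consistent with write amplification computed as total writes over user writes, 127168 / 126208 ≈ 1.0076. The 960-block difference is FTL metadata, matching the 960 total writes (with 0 user writes, hence WAF: inf) recorded at the earlier shutdown above.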
00:30:59.008 [2024-11-20 13:50:50.751142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.377 ms 00:30:59.008 [2024-11-20 13:50:50.751155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.764362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.764423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:59.008 [2024-11-20 13:50:50.764453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.968 ms 00:30:59.008 [2024-11-20 13:50:50.764477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.786324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.786381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:59.008 [2024-11-20 13:50:50.786401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.819 ms 00:30:59.008 [2024-11-20 13:50:50.786413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.793126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.793164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:59.008 [2024-11-20 13:50:50.793179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.669 ms 00:30:59.008 [2024-11-20 13:50:50.793191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.826533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.826598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:59.008 [2024-11-20 13:50:50.826616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.242 ms 00:30:59.008 [2024-11-20 13:50:50.826628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.844332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.844389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:59.008 [2024-11-20 13:50:50.844408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.649 ms 00:30:59.008 [2024-11-20 13:50:50.844421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.916795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.916882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:59.008 [2024-11-20 13:50:50.916905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.319 ms 00:30:59.008 [2024-11-20 13:50:50.916917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.948662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.948721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:59.008 [2024-11-20 13:50:50.948740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.719 ms 00:30:59.008 [2024-11-20 13:50:50.948752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:50.979810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:50.979890] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:59.008 [2024-11-20 13:50:50.979910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.005 ms 00:30:59.008 [2024-11-20 13:50:50.979921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:51.010615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:51.010671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:59.008 [2024-11-20 13:50:51.010690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.642 ms 00:30:59.008 [2024-11-20 13:50:51.010702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:51.042188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.008 [2024-11-20 13:50:51.042243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:59.008 [2024-11-20 13:50:51.042262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.373 ms 00:30:59.008 [2024-11-20 13:50:51.042274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.008 [2024-11-20 13:50:51.042333] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:59.008 [2024-11-20 13:50:51.042356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126208 / 261120 wr_cnt: 1 state: open 00:30:59.008 [2024-11-20 13:50:51.042371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:59.008 [2024-11-20 13:50:51.042488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
16: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.042986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043116] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043614] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.043998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 
13:50:51.044126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:59.009 [2024-11-20 13:50:51.044323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:59.010 [2024-11-20 13:50:51.044346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75407c1e-ca5b-4724-90c3-ab5917c4cf24 00:30:59.010 [2024-11-20 13:50:51.044366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126208 00:30:59.010 [2024-11-20 13:50:51.044387] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127168 00:30:59.010 [2024-11-20 13:50:51.044405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126208 00:30:59.010 [2024-11-20 13:50:51.044420] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:30:59.010 [2024-11-20 13:50:51.044431] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:59.010 [2024-11-20 13:50:51.044450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:59.010 [2024-11-20 13:50:51.044488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:59.010 [2024-11-20 13:50:51.044508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:59.010 [2024-11-20 13:50:51.044527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:59.010 [2024-11-20 13:50:51.044550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.010 [2024-11-20 13:50:51.044571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:59.010 [2024-11-20 13:50:51.044594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:30:59.010 [2024-11-20 13:50:51.044615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-20 13:50:51.061504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.268 [2024-11-20 13:50:51.061552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:59.268 [2024-11-20 13:50:51.061570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.818 ms 00:30:59.268 [2024-11-20 13:50:51.061592] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-20 13:50:51.062198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.268 [2024-11-20 13:50:51.062253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:59.268 [2024-11-20 13:50:51.062287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:30:59.268 [2024-11-20 13:50:51.062312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-20 13:50:51.108022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-20 13:50:51.108099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.268 [2024-11-20 13:50:51.108119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-20 13:50:51.108131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-20 13:50:51.108214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-20 13:50:51.108230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.268 [2024-11-20 13:50:51.108242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-20 13:50:51.108253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-20 13:50:51.108381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-20 13:50:51.108402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.268 [2024-11-20 13:50:51.108421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-20 13:50:51.108433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-20 13:50:51.108456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-20 13:50:51.108471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.268 [2024-11-20 13:50:51.108482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-20 13:50:51.108493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-20 13:50:51.221077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-20 13:50:51.221152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.268 [2024-11-20 13:50:51.221181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-20 13:50:51.221194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.324677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.527 [2024-11-20 13:50:51.324776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.527 [2024-11-20 13:50:51.324811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.527 [2024-11-20 13:50:51.324834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.325020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.527 [2024-11-20 13:50:51.325056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.527 [2024-11-20 13:50:51.325079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:30:59.527 [2024-11-20 13:50:51.325129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.325266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.527 [2024-11-20 13:50:51.325311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.527 [2024-11-20 13:50:51.325335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.527 [2024-11-20 13:50:51.325362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.325543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.527 [2024-11-20 13:50:51.325590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.527 [2024-11-20 13:50:51.325618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.527 [2024-11-20 13:50:51.325639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.325735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.527 [2024-11-20 13:50:51.325780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:59.527 [2024-11-20 13:50:51.325806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.527 [2024-11-20 13:50:51.325826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.325916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.527 [2024-11-20 13:50:51.325953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:59.527 [2024-11-20 13:50:51.325976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.527 [2024-11-20 13:50:51.325996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.326087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.527 [2024-11-20 13:50:51.326125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.527 [2024-11-20 13:50:51.326148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.527 [2024-11-20 13:50:51.326168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.527 [2024-11-20 13:50:51.326384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 587.111 ms, result 0 00:31:00.903 00:31:00.903 00:31:00.903 13:50:52 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:00.903 [2024-11-20 13:50:52.794225] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
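(Editor's sketch on the spdk_dd arguments above: like dd, spdk_dd takes --skip and --count in blocks rather than bytes. Assuming a 4096-byte IO unit, which is an assumption, not something this log states, the values line up with the 1024 MiB copy total reported further down:)

    /* size_check.c - hedged sketch converting the spdk_dd block arguments
     * above into byte sizes. The 4096-byte IO unit is ASSUMED; the skip and
     * count values are copied from the command line in the log. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t io_unit = 4096;   /* assumed IO size in bytes */
        const uint64_t skip    = 131072; /* --skip, in IO units */
        const uint64_t count   = 262144; /* --count, in IO units */

        printf("skip offset: %llu MiB\n",
               (unsigned long long)((skip * io_unit) >> 20));  /* 512 MiB */
        printf("copy length: %llu MiB\n",
               (unsigned long long)((count * io_unit) >> 20)); /* 1024 MiB */
        return 0;
    }

(Under that assumption the copy length works out to exactly 1024 MiB, matching the "Copying: 1024/1024 [MB]" progress output below.)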
00:31:00.903 [2024-11-20 13:50:52.794378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80693 ] 00:31:01.161 [2024-11-20 13:50:53.015906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.161 [2024-11-20 13:50:53.161146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.727 [2024-11-20 13:50:53.509563] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:01.727 [2024-11-20 13:50:53.509664] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:01.727 [2024-11-20 13:50:53.670465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.670530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:01.727 [2024-11-20 13:50:53.670556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:01.727 [2024-11-20 13:50:53.670569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.670637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.670655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.727 [2024-11-20 13:50:53.670672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:01.727 [2024-11-20 13:50:53.670685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.670725] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:01.727 [2024-11-20 13:50:53.671652] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:01.727 [2024-11-20 13:50:53.671704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.671724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.727 [2024-11-20 13:50:53.671739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:31:01.727 [2024-11-20 13:50:53.671750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.672832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:01.727 [2024-11-20 13:50:53.689012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.689063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:01.727 [2024-11-20 13:50:53.689082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.180 ms 00:31:01.727 [2024-11-20 13:50:53.689095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.689184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.689212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:01.727 [2024-11-20 13:50:53.689236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:01.727 [2024-11-20 13:50:53.689256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.693975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:01.727 [2024-11-20 13:50:53.694029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.727 [2024-11-20 13:50:53.694048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:31:01.727 [2024-11-20 13:50:53.694070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.694196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.694218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.727 [2024-11-20 13:50:53.694237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:01.727 [2024-11-20 13:50:53.694271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.694353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.694373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:01.727 [2024-11-20 13:50:53.694386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:01.727 [2024-11-20 13:50:53.694398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.694442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:01.727 [2024-11-20 13:50:53.699273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.699315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.727 [2024-11-20 13:50:53.699332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.849 ms 00:31:01.727 [2024-11-20 13:50:53.699351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.699409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.727 [2024-11-20 13:50:53.699427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:01.727 [2024-11-20 13:50:53.699441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:01.727 [2024-11-20 13:50:53.699452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.727 [2024-11-20 13:50:53.699533] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:01.727 [2024-11-20 13:50:53.699579] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:01.728 [2024-11-20 13:50:53.699637] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:01.728 [2024-11-20 13:50:53.699665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:01.728 [2024-11-20 13:50:53.699796] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:01.728 [2024-11-20 13:50:53.699817] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:01.728 [2024-11-20 13:50:53.699832] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:01.728 [2024-11-20 13:50:53.699848] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:01.728 [2024-11-20 13:50:53.699861] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:01.728 [2024-11-20 13:50:53.699897] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:01.728 [2024-11-20 13:50:53.699910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:01.728 [2024-11-20 13:50:53.699921] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:01.728 [2024-11-20 13:50:53.699938] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:01.728 [2024-11-20 13:50:53.699956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.728 [2024-11-20 13:50:53.699976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:01.728 [2024-11-20 13:50:53.699997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:31:01.728 [2024-11-20 13:50:53.700009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.728 [2024-11-20 13:50:53.700124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.728 [2024-11-20 13:50:53.700143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:01.728 [2024-11-20 13:50:53.700156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:01.728 [2024-11-20 13:50:53.700168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.728 [2024-11-20 13:50:53.700315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:01.728 [2024-11-20 13:50:53.700339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:01.728 [2024-11-20 13:50:53.700352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:01.728 [2024-11-20 13:50:53.700406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:01.728 [2024-11-20 13:50:53.700462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.728 [2024-11-20 13:50:53.700484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:01.728 [2024-11-20 13:50:53.700495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:01.728 [2024-11-20 13:50:53.700505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.728 [2024-11-20 13:50:53.700516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:01.728 [2024-11-20 13:50:53.700527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:01.728 [2024-11-20 13:50:53.700551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:01.728 [2024-11-20 13:50:53.700584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700603] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:01.728 [2024-11-20 13:50:53.700626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:01.728 [2024-11-20 13:50:53.700676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:01.728 [2024-11-20 13:50:53.700709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:01.728 [2024-11-20 13:50:53.700741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:01.728 [2024-11-20 13:50:53.700774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.728 [2024-11-20 13:50:53.700812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:01.728 [2024-11-20 13:50:53.700831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:01.728 [2024-11-20 13:50:53.700849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.728 [2024-11-20 13:50:53.700860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:01.728 [2024-11-20 13:50:53.700890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:01.728 [2024-11-20 13:50:53.700902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:01.728 [2024-11-20 13:50:53.700924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:01.728 [2024-11-20 13:50:53.700935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700945] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:01.728 [2024-11-20 13:50:53.700957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:01.728 [2024-11-20 13:50:53.700968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.728 [2024-11-20 13:50:53.700979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.728 [2024-11-20 13:50:53.700991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:01.728 [2024-11-20 13:50:53.701005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:01.728 [2024-11-20 13:50:53.701024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:01.728 
[2024-11-20 13:50:53.701043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:01.728 [2024-11-20 13:50:53.701060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:01.728 [2024-11-20 13:50:53.701080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:01.728 [2024-11-20 13:50:53.701095] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:01.728 [2024-11-20 13:50:53.701109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.728 [2024-11-20 13:50:53.701122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:01.728 [2024-11-20 13:50:53.701136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:01.728 [2024-11-20 13:50:53.701152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:01.728 [2024-11-20 13:50:53.701164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:01.728 [2024-11-20 13:50:53.701176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:01.728 [2024-11-20 13:50:53.701187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:01.728 [2024-11-20 13:50:53.701201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:01.728 [2024-11-20 13:50:53.701221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:01.728 [2024-11-20 13:50:53.701239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:01.728 [2024-11-20 13:50:53.701251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:01.728 [2024-11-20 13:50:53.701262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:01.728 [2024-11-20 13:50:53.701287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:01.728 [2024-11-20 13:50:53.701307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:01.728 [2024-11-20 13:50:53.701324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:01.728 [2024-11-20 13:50:53.701335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:01.728 [2024-11-20 13:50:53.701354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.728 [2024-11-20 13:50:53.701367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:01.728 [2024-11-20 13:50:53.701379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:01.728 [2024-11-20 13:50:53.701391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:01.728 [2024-11-20 13:50:53.701402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:01.728 [2024-11-20 13:50:53.701416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.728 [2024-11-20 13:50:53.701431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:01.728 [2024-11-20 13:50:53.701452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:31:01.729 [2024-11-20 13:50:53.701471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.729 [2024-11-20 13:50:53.738799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.729 [2024-11-20 13:50:53.738883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:01.729 [2024-11-20 13:50:53.738908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.253 ms 00:31:01.729 [2024-11-20 13:50:53.738921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.729 [2024-11-20 13:50:53.739062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.729 [2024-11-20 13:50:53.739082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:01.729 [2024-11-20 13:50:53.739095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:01.729 [2024-11-20 13:50:53.739107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.801810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.801886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:01.988 [2024-11-20 13:50:53.801914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.589 ms 00:31:01.988 [2024-11-20 13:50:53.801938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.802041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.802066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:01.988 [2024-11-20 13:50:53.802098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:01.988 [2024-11-20 13:50:53.802119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.802604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.802634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:01.988 [2024-11-20 13:50:53.802650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:31:01.988 [2024-11-20 13:50:53.802661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.802843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.802906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:01.988 [2024-11-20 13:50:53.802936] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:31:01.988 [2024-11-20 13:50:53.802970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.821218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.821278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:01.988 [2024-11-20 13:50:53.821302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.199 ms 00:31:01.988 [2024-11-20 13:50:53.821315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.837786] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:01.988 [2024-11-20 13:50:53.837840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:01.988 [2024-11-20 13:50:53.837861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.837886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:01.988 [2024-11-20 13:50:53.837901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.379 ms 00:31:01.988 [2024-11-20 13:50:53.837912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.868101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.868174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:01.988 [2024-11-20 13:50:53.868195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.126 ms 00:31:01.988 [2024-11-20 13:50:53.868209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.884178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.884242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:01.988 [2024-11-20 13:50:53.884261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.913 ms 00:31:01.988 [2024-11-20 13:50:53.884272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.900603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.900666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:01.988 [2024-11-20 13:50:53.900705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.277 ms 00:31:01.988 [2024-11-20 13:50:53.900725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.901678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.901722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:01.988 [2024-11-20 13:50:53.901749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:31:01.988 [2024-11-20 13:50:53.901779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.980482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.980554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:01.988 [2024-11-20 13:50:53.980584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.660 ms 00:31:01.988 [2024-11-20 13:50:53.980596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.993508] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:01.988 [2024-11-20 13:50:53.996267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.996311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:01.988 [2024-11-20 13:50:53.996340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.590 ms 00:31:01.988 [2024-11-20 13:50:53.996361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.996529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.996573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:01.988 [2024-11-20 13:50:53.996597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:01.988 [2024-11-20 13:50:53.996624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.998312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.998356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:01.988 [2024-11-20 13:50:53.998381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:31:01.988 [2024-11-20 13:50:53.998402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.998460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.998487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:01.988 [2024-11-20 13:50:53.998510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:01.988 [2024-11-20 13:50:53.998529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.988 [2024-11-20 13:50:53.998604] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:01.988 [2024-11-20 13:50:53.998634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.988 [2024-11-20 13:50:53.998655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:01.988 [2024-11-20 13:50:53.998677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:01.988 [2024-11-20 13:50:53.998696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.246 [2024-11-20 13:50:54.030838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.246 [2024-11-20 13:50:54.030927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:02.246 [2024-11-20 13:50:54.030957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.080 ms 00:31:02.246 [2024-11-20 13:50:54.030988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.246 [2024-11-20 13:50:54.031127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.246 [2024-11-20 13:50:54.031156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:02.246 [2024-11-20 13:50:54.031180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:02.246 [2024-11-20 13:50:54.031198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
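(Editor's sketch on the L2P figures in the startup dump above: the layout lists the l2p region at 80.00 MiB, and the dump reports "L2P entries: 20971520" with "L2P address size: 4". Those are consistent if the region is simply entries times address size; the "l2p maximum resident size is: 9 (of 10) MiB" line then indicates only a small cache of that table is kept resident. A minimal check, with both inputs taken from the log:)

    /* l2p_size.c - sketch checking that the l2p region size matches
     * entries * address_size; both values come from the startup dump. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long entries   = 20971520ULL; /* "L2P entries" */
        const unsigned long long addr_size = 4ULL;        /* "L2P address size", bytes */

        /* 20971520 * 4 B = 83886080 B = exactly 80 MiB */
        printf("L2P table size: %llu MiB\n", (entries * addr_size) >> 20);
        return 0;
    }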
00:31:02.246 [2024-11-20 13:50:54.033945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.144 ms, result 0 00:31:03.622  [2024-11-20T13:50:56.648Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-20T13:50:57.581Z] Copying: 48/1024 [MB] (25 MBps) [2024-11-20T13:50:58.514Z] Copying: 73/1024 [MB] (24 MBps) [2024-11-20T13:50:59.448Z] Copying: 98/1024 [MB] (25 MBps) [2024-11-20T13:51:00.381Z] Copying: 127/1024 [MB] (28 MBps) [2024-11-20T13:51:01.317Z] Copying: 152/1024 [MB] (25 MBps) [2024-11-20T13:51:02.692Z] Copying: 169/1024 [MB] (17 MBps) [2024-11-20T13:51:03.626Z] Copying: 193/1024 [MB] (23 MBps) [2024-11-20T13:51:04.559Z] Copying: 215/1024 [MB] (22 MBps) [2024-11-20T13:51:05.492Z] Copying: 235/1024 [MB] (19 MBps) [2024-11-20T13:51:06.426Z] Copying: 262/1024 [MB] (27 MBps) [2024-11-20T13:51:07.357Z] Copying: 285/1024 [MB] (23 MBps) [2024-11-20T13:51:08.296Z] Copying: 307/1024 [MB] (22 MBps) [2024-11-20T13:51:09.670Z] Copying: 328/1024 [MB] (20 MBps) [2024-11-20T13:51:10.605Z] Copying: 348/1024 [MB] (20 MBps) [2024-11-20T13:51:11.270Z] Copying: 370/1024 [MB] (21 MBps) [2024-11-20T13:51:12.645Z] Copying: 393/1024 [MB] (23 MBps) [2024-11-20T13:51:13.579Z] Copying: 415/1024 [MB] (21 MBps) [2024-11-20T13:51:14.512Z] Copying: 438/1024 [MB] (23 MBps) [2024-11-20T13:51:15.447Z] Copying: 460/1024 [MB] (22 MBps) [2024-11-20T13:51:16.382Z] Copying: 484/1024 [MB] (23 MBps) [2024-11-20T13:51:17.315Z] Copying: 509/1024 [MB] (25 MBps) [2024-11-20T13:51:18.691Z] Copying: 535/1024 [MB] (25 MBps) [2024-11-20T13:51:19.626Z] Copying: 558/1024 [MB] (22 MBps) [2024-11-20T13:51:20.562Z] Copying: 585/1024 [MB] (27 MBps) [2024-11-20T13:51:21.495Z] Copying: 612/1024 [MB] (26 MBps) [2024-11-20T13:51:22.432Z] Copying: 639/1024 [MB] (27 MBps) [2024-11-20T13:51:23.365Z] Copying: 665/1024 [MB] (26 MBps) [2024-11-20T13:51:24.298Z] Copying: 692/1024 [MB] (27 MBps) [2024-11-20T13:51:25.671Z] Copying: 718/1024 [MB] (25 MBps) [2024-11-20T13:51:26.616Z] Copying: 743/1024 [MB] (25 MBps) [2024-11-20T13:51:27.554Z] Copying: 770/1024 [MB] (26 MBps) [2024-11-20T13:51:28.493Z] Copying: 798/1024 [MB] (27 MBps) [2024-11-20T13:51:29.429Z] Copying: 823/1024 [MB] (25 MBps) [2024-11-20T13:51:30.366Z] Copying: 852/1024 [MB] (28 MBps) [2024-11-20T13:51:31.302Z] Copying: 881/1024 [MB] (28 MBps) [2024-11-20T13:51:32.677Z] Copying: 909/1024 [MB] (27 MBps) [2024-11-20T13:51:33.611Z] Copying: 936/1024 [MB] (27 MBps) [2024-11-20T13:51:34.543Z] Copying: 961/1024 [MB] (24 MBps) [2024-11-20T13:51:35.491Z] Copying: 986/1024 [MB] (25 MBps) [2024-11-20T13:51:35.750Z] Copying: 1013/1024 [MB] (27 MBps) [2024-11-20T13:51:36.315Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 13:51:36.035651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.276 [2024-11-20 13:51:36.035752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:44.276 [2024-11-20 13:51:36.035784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:44.277 [2024-11-20 13:51:36.035824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.035865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:44.277 [2024-11-20 13:51:36.041329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.041372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:44.277 
[2024-11-20 13:51:36.041391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.417 ms 00:31:44.277 [2024-11-20 13:51:36.041407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.041707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.041736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:44.277 [2024-11-20 13:51:36.041754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:31:44.277 [2024-11-20 13:51:36.041768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.046910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.046951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:44.277 [2024-11-20 13:51:36.046966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.107 ms 00:31:44.277 [2024-11-20 13:51:36.046979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.054214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.054248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:44.277 [2024-11-20 13:51:36.054263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.190 ms 00:31:44.277 [2024-11-20 13:51:36.054274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.087073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.087133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:44.277 [2024-11-20 13:51:36.087152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.734 ms 00:31:44.277 [2024-11-20 13:51:36.087164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.105848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.105927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:44.277 [2024-11-20 13:51:36.105946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.623 ms 00:31:44.277 [2024-11-20 13:51:36.105959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.188948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.189040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:44.277 [2024-11-20 13:51:36.189061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.919 ms 00:31:44.277 [2024-11-20 13:51:36.189073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.222342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.222411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:44.277 [2024-11-20 13:51:36.222431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.240 ms 00:31:44.277 [2024-11-20 13:51:36.222442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.254521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.254627] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:44.277 [2024-11-20 13:51:36.254682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.991 ms 00:31:44.277 [2024-11-20 13:51:36.254696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.277 [2024-11-20 13:51:36.288203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.277 [2024-11-20 13:51:36.288287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:44.277 [2024-11-20 13:51:36.288306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.382 ms 00:31:44.277 [2024-11-20 13:51:36.288319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.536 [2024-11-20 13:51:36.321624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.536 [2024-11-20 13:51:36.321712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:44.536 [2024-11-20 13:51:36.321733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.128 ms 00:31:44.536 [2024-11-20 13:51:36.321745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.536 [2024-11-20 13:51:36.321831] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:44.536 [2024-11-20 13:51:36.321858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:44.536 [2024-11-20 13:51:36.321886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.321998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:44.536 [2024-11-20 13:51:36.322466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322657] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322981] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.322993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:44.537 [2024-11-20 13:51:36.323108] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:44.537 [2024-11-20 13:51:36.323120] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75407c1e-ca5b-4724-90c3-ab5917c4cf24 00:31:44.537 [2024-11-20 13:51:36.323132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:44.537 [2024-11-20 13:51:36.323143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5824 00:31:44.537 [2024-11-20 13:51:36.323154] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4864 00:31:44.537 [2024-11-20 13:51:36.323166] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1974 00:31:44.537 [2024-11-20 13:51:36.323177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:44.537 [2024-11-20 13:51:36.323199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:44.537 [2024-11-20 13:51:36.323220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:44.537 [2024-11-20 13:51:36.323246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:44.537 [2024-11-20 13:51:36.323257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:44.537 [2024-11-20 13:51:36.323268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.537 [2024-11-20 13:51:36.323280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:44.537 [2024-11-20 13:51:36.323292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:31:44.537 [2024-11-20 13:51:36.323303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.537 [2024-11-20 13:51:36.340619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.537 [2024-11-20 13:51:36.340695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:44.537 [2024-11-20 13:51:36.340714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.246 ms 00:31:44.537 [2024-11-20 13:51:36.340741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
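The statistics block above is internally consistent: the WAF (write amplification factor) line matches total writes divided by user writes, 5824 / 4864. A quick check of that figure (this one-liner is illustrative, not part of the captured run):

$ awk 'BEGIN { printf "WAF = %.4f\n", 5824 / 4864 }'
WAF = 1.1974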
00:31:44.537 [2024-11-20 13:51:36.341228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.537 [2024-11-20 13:51:36.341251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:44.537 [2024-11-20 13:51:36.341265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:31:44.537 [2024-11-20 13:51:36.341276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.537 [2024-11-20 13:51:36.385492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.537 [2024-11-20 13:51:36.385569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:44.537 [2024-11-20 13:51:36.385588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.537 [2024-11-20 13:51:36.385600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.537 [2024-11-20 13:51:36.385683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.537 [2024-11-20 13:51:36.385698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:44.537 [2024-11-20 13:51:36.385710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.537 [2024-11-20 13:51:36.385722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.537 [2024-11-20 13:51:36.385859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.537 [2024-11-20 13:51:36.385896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:44.537 [2024-11-20 13:51:36.385917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.537 [2024-11-20 13:51:36.385928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.537 [2024-11-20 13:51:36.385952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.537 [2024-11-20 13:51:36.385966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:44.537 [2024-11-20 13:51:36.385978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.537 [2024-11-20 13:51:36.385989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.537 [2024-11-20 13:51:36.491571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.537 [2024-11-20 13:51:36.491653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:44.537 [2024-11-20 13:51:36.491688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.537 [2024-11-20 13:51:36.491701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.795 [2024-11-20 13:51:36.580069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.795 [2024-11-20 13:51:36.580142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:44.795 [2024-11-20 13:51:36.580162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.795 [2024-11-20 13:51:36.580175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.796 [2024-11-20 13:51:36.580286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.796 [2024-11-20 13:51:36.580304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:44.796 [2024-11-20 13:51:36.580317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.796 [2024-11-20 
13:51:36.580341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.796 [2024-11-20 13:51:36.580390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.796 [2024-11-20 13:51:36.580406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:44.796 [2024-11-20 13:51:36.580419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.796 [2024-11-20 13:51:36.580429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.796 [2024-11-20 13:51:36.580562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.796 [2024-11-20 13:51:36.580581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:44.796 [2024-11-20 13:51:36.580594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.796 [2024-11-20 13:51:36.580606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.796 [2024-11-20 13:51:36.580662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.796 [2024-11-20 13:51:36.580680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:44.796 [2024-11-20 13:51:36.580692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.796 [2024-11-20 13:51:36.580703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.796 [2024-11-20 13:51:36.580748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.796 [2024-11-20 13:51:36.580763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:44.796 [2024-11-20 13:51:36.580774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.796 [2024-11-20 13:51:36.580785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.796 [2024-11-20 13:51:36.580841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.796 [2024-11-20 13:51:36.580858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:44.796 [2024-11-20 13:51:36.580897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.796 [2024-11-20 13:51:36.580911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.796 [2024-11-20 13:51:36.581067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 545.378 ms, result 0 00:31:45.729 00:31:45.729 00:31:45.729 13:51:37 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:48.261 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:48.261 Process with pid 79223 is not found 00:31:48.261 Remove shared memory files 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79223 00:31:48.261 13:51:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79223 ']' 00:31:48.261 13:51:39 
ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79223 00:31:48.261 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79223) - No such process 00:31:48.261 13:51:39 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79223 is not found' 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:48.261 13:51:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:31:48.261 ************************************ 00:31:48.261 END TEST ftl_restore 00:31:48.261 ************************************ 00:31:48.261 00:31:48.261 real 3m13.779s 00:31:48.261 user 2m58.229s 00:31:48.261 sys 0m18.098s 00:31:48.261 13:51:39 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.261 13:51:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:48.261 13:51:39 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:48.261 13:51:39 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:48.261 13:51:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.261 13:51:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:48.261 ************************************ 00:31:48.261 START TEST ftl_dirty_shutdown 00:31:48.261 ************************************ 00:31:48.261 13:51:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:48.261 * Looking for test storage... 
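The killprocess step above relies on the signal-0 idiom: `kill -0 <pid>` delivers no signal but still performs the existence check, which is why the already-exited pid 79223 yields "No such process" and the harness just echoes the not-found message before cleaning up. A minimal sketch of that pattern (the helper name is hypothetical, not the actual autotest_common.sh implementation):

# Signal 0 performs only the existence/permission check; nothing is delivered.
pid_alive() {            # hypothetical helper, for illustration only
    kill -0 "$1" 2>/dev/null
}
pid_alive 79223 || echo 'Process with pid 79223 is not found'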
00:31:48.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:48.261 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.262 --rc genhtml_branch_coverage=1 00:31:48.262 --rc genhtml_function_coverage=1 00:31:48.262 --rc genhtml_legend=1 00:31:48.262 --rc geninfo_all_blocks=1 00:31:48.262 --rc geninfo_unexecuted_blocks=1 00:31:48.262 00:31:48.262 ' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.262 --rc genhtml_branch_coverage=1 00:31:48.262 --rc genhtml_function_coverage=1 00:31:48.262 --rc genhtml_legend=1 00:31:48.262 --rc geninfo_all_blocks=1 00:31:48.262 --rc geninfo_unexecuted_blocks=1 00:31:48.262 00:31:48.262 ' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.262 --rc genhtml_branch_coverage=1 00:31:48.262 --rc genhtml_function_coverage=1 00:31:48.262 --rc genhtml_legend=1 00:31:48.262 --rc geninfo_all_blocks=1 00:31:48.262 --rc geninfo_unexecuted_blocks=1 00:31:48.262 00:31:48.262 ' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.262 --rc genhtml_branch_coverage=1 00:31:48.262 --rc genhtml_function_coverage=1 00:31:48.262 --rc genhtml_legend=1 00:31:48.262 --rc geninfo_all_blocks=1 00:31:48.262 --rc geninfo_unexecuted_blocks=1 00:31:48.262 00:31:48.262 ' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:31:48.262 13:51:40 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:31:48.262 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81226 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81226 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81226 ']' 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.263 13:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:48.522 [2024-11-20 13:51:40.364698] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
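At this point the harness has launched spdk_tgt on a single core (-m 0x1) as pid 81226, and waitforlisten blocks until the target answers on /var/tmp/spdk.sock (up to max_retries=100). A rough shell equivalent of that wait, assuming the standard rpc_get_methods RPC; a sketch, not the real autotest_common.sh code:

# Illustrative wait loop: poll the target's RPC socket until it responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5    # retry interval is an assumption, not taken from the log
done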
00:31:48.522 [2024-11-20 13:51:40.364964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81226 ] 00:31:48.522 [2024-11-20 13:51:40.553771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.780 [2024-11-20 13:51:40.680059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:49.716 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:49.975 13:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:50.233 13:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:50.233 { 00:31:50.233 "name": "nvme0n1", 00:31:50.233 "aliases": [ 00:31:50.233 "573418af-1028-4003-b951-dc2cd14d9354" 00:31:50.233 ], 00:31:50.233 "product_name": "NVMe disk", 00:31:50.233 "block_size": 4096, 00:31:50.233 "num_blocks": 1310720, 00:31:50.233 "uuid": "573418af-1028-4003-b951-dc2cd14d9354", 00:31:50.233 "numa_id": -1, 00:31:50.233 "assigned_rate_limits": { 00:31:50.233 "rw_ios_per_sec": 0, 00:31:50.233 "rw_mbytes_per_sec": 0, 00:31:50.233 "r_mbytes_per_sec": 0, 00:31:50.233 "w_mbytes_per_sec": 0 00:31:50.233 }, 00:31:50.233 "claimed": true, 00:31:50.233 "claim_type": "read_many_write_one", 00:31:50.233 "zoned": false, 00:31:50.233 "supported_io_types": { 00:31:50.233 "read": true, 00:31:50.233 "write": true, 00:31:50.233 "unmap": true, 00:31:50.233 "flush": true, 00:31:50.233 "reset": true, 00:31:50.233 "nvme_admin": true, 00:31:50.233 "nvme_io": true, 00:31:50.233 "nvme_io_md": false, 00:31:50.233 "write_zeroes": true, 00:31:50.233 "zcopy": false, 00:31:50.233 "get_zone_info": false, 00:31:50.233 "zone_management": false, 00:31:50.233 "zone_append": false, 00:31:50.233 "compare": true, 00:31:50.233 "compare_and_write": false, 00:31:50.233 "abort": true, 00:31:50.233 "seek_hole": false, 00:31:50.233 "seek_data": false, 00:31:50.233 
"copy": true, 00:31:50.233 "nvme_iov_md": false 00:31:50.233 }, 00:31:50.233 "driver_specific": { 00:31:50.233 "nvme": [ 00:31:50.233 { 00:31:50.233 "pci_address": "0000:00:11.0", 00:31:50.233 "trid": { 00:31:50.233 "trtype": "PCIe", 00:31:50.233 "traddr": "0000:00:11.0" 00:31:50.233 }, 00:31:50.233 "ctrlr_data": { 00:31:50.233 "cntlid": 0, 00:31:50.233 "vendor_id": "0x1b36", 00:31:50.233 "model_number": "QEMU NVMe Ctrl", 00:31:50.233 "serial_number": "12341", 00:31:50.233 "firmware_revision": "8.0.0", 00:31:50.233 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:50.233 "oacs": { 00:31:50.233 "security": 0, 00:31:50.233 "format": 1, 00:31:50.233 "firmware": 0, 00:31:50.233 "ns_manage": 1 00:31:50.233 }, 00:31:50.233 "multi_ctrlr": false, 00:31:50.233 "ana_reporting": false 00:31:50.233 }, 00:31:50.233 "vs": { 00:31:50.233 "nvme_version": "1.4" 00:31:50.233 }, 00:31:50.233 "ns_data": { 00:31:50.233 "id": 1, 00:31:50.233 "can_share": false 00:31:50.233 } 00:31:50.233 } 00:31:50.233 ], 00:31:50.233 "mp_policy": "active_passive" 00:31:50.233 } 00:31:50.233 } 00:31:50.233 ]' 00:31:50.233 13:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:50.233 13:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:50.233 13:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:50.492 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:50.751 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=311c3ae9-f81e-42a8-b2b0-160707218a73 00:31:50.751 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:50.751 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 311c3ae9-f81e-42a8-b2b0-160707218a73 00:31:51.009 13:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:51.268 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=21310622-fff6-4544-87c2-a0b500c89d5e 00:31:51.268 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 21310622-fff6-4544-87c2-a0b500c89d5e 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:51.528 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:52.094 { 00:31:52.094 "name": "b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d", 00:31:52.094 "aliases": [ 00:31:52.094 "lvs/nvme0n1p0" 00:31:52.094 ], 00:31:52.094 "product_name": "Logical Volume", 00:31:52.094 "block_size": 4096, 00:31:52.094 "num_blocks": 26476544, 00:31:52.094 "uuid": "b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d", 00:31:52.094 "assigned_rate_limits": { 00:31:52.094 "rw_ios_per_sec": 0, 00:31:52.094 "rw_mbytes_per_sec": 0, 00:31:52.094 "r_mbytes_per_sec": 0, 00:31:52.094 "w_mbytes_per_sec": 0 00:31:52.094 }, 00:31:52.094 "claimed": false, 00:31:52.094 "zoned": false, 00:31:52.094 "supported_io_types": { 00:31:52.094 "read": true, 00:31:52.094 "write": true, 00:31:52.094 "unmap": true, 00:31:52.094 "flush": false, 00:31:52.094 "reset": true, 00:31:52.094 "nvme_admin": false, 00:31:52.094 "nvme_io": false, 00:31:52.094 "nvme_io_md": false, 00:31:52.094 "write_zeroes": true, 00:31:52.094 "zcopy": false, 00:31:52.094 "get_zone_info": false, 00:31:52.094 "zone_management": false, 00:31:52.094 "zone_append": false, 00:31:52.094 "compare": false, 00:31:52.094 "compare_and_write": false, 00:31:52.094 "abort": false, 00:31:52.094 "seek_hole": true, 00:31:52.094 "seek_data": true, 00:31:52.094 "copy": false, 00:31:52.094 "nvme_iov_md": false 00:31:52.094 }, 00:31:52.094 "driver_specific": { 00:31:52.094 "lvol": { 00:31:52.094 "lvol_store_uuid": "21310622-fff6-4544-87c2-a0b500c89d5e", 00:31:52.094 "base_bdev": "nvme0n1", 00:31:52.094 "thin_provision": true, 00:31:52.094 "num_allocated_clusters": 0, 00:31:52.094 "snapshot": false, 00:31:52.094 "clone": false, 00:31:52.094 "esnap_clone": false 00:31:52.094 } 00:31:52.094 } 00:31:52.094 } 00:31:52.094 ]' 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:52.094 13:51:43 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:52.353 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:52.611 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:52.611 { 00:31:52.611 "name": "b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d", 00:31:52.611 "aliases": [ 00:31:52.611 "lvs/nvme0n1p0" 00:31:52.611 ], 00:31:52.611 "product_name": "Logical Volume", 00:31:52.611 "block_size": 4096, 00:31:52.611 "num_blocks": 26476544, 00:31:52.611 "uuid": "b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d", 00:31:52.611 "assigned_rate_limits": { 00:31:52.611 "rw_ios_per_sec": 0, 00:31:52.611 "rw_mbytes_per_sec": 0, 00:31:52.611 "r_mbytes_per_sec": 0, 00:31:52.611 "w_mbytes_per_sec": 0 00:31:52.611 }, 00:31:52.611 "claimed": false, 00:31:52.611 "zoned": false, 00:31:52.611 "supported_io_types": { 00:31:52.611 "read": true, 00:31:52.611 "write": true, 00:31:52.612 "unmap": true, 00:31:52.612 "flush": false, 00:31:52.612 "reset": true, 00:31:52.612 "nvme_admin": false, 00:31:52.612 "nvme_io": false, 00:31:52.612 "nvme_io_md": false, 00:31:52.612 "write_zeroes": true, 00:31:52.612 "zcopy": false, 00:31:52.612 "get_zone_info": false, 00:31:52.612 "zone_management": false, 00:31:52.612 "zone_append": false, 00:31:52.612 "compare": false, 00:31:52.612 "compare_and_write": false, 00:31:52.612 "abort": false, 00:31:52.612 "seek_hole": true, 00:31:52.612 "seek_data": true, 00:31:52.612 "copy": false, 00:31:52.612 "nvme_iov_md": false 00:31:52.612 }, 00:31:52.612 "driver_specific": { 00:31:52.612 "lvol": { 00:31:52.612 "lvol_store_uuid": "21310622-fff6-4544-87c2-a0b500c89d5e", 00:31:52.612 "base_bdev": "nvme0n1", 00:31:52.612 "thin_provision": true, 00:31:52.612 "num_allocated_clusters": 0, 00:31:52.612 "snapshot": false, 00:31:52.612 "clone": false, 00:31:52.612 "esnap_clone": false 00:31:52.612 } 00:31:52.612 } 00:31:52.612 } 00:31:52.612 ]' 00:31:52.612 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:52.871 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:52.871 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:52.871 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:52.871 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:52.871 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:52.871 13:51:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:31:52.871 13:51:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:53.130 13:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:31:53.130 13:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:53.130 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:53.130 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:53.130 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:53.130 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:53.130 13:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:53.389 { 00:31:53.389 "name": "b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d", 00:31:53.389 "aliases": [ 00:31:53.389 "lvs/nvme0n1p0" 00:31:53.389 ], 00:31:53.389 "product_name": "Logical Volume", 00:31:53.389 "block_size": 4096, 00:31:53.389 "num_blocks": 26476544, 00:31:53.389 "uuid": "b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d", 00:31:53.389 "assigned_rate_limits": { 00:31:53.389 "rw_ios_per_sec": 0, 00:31:53.389 "rw_mbytes_per_sec": 0, 00:31:53.389 "r_mbytes_per_sec": 0, 00:31:53.389 "w_mbytes_per_sec": 0 00:31:53.389 }, 00:31:53.389 "claimed": false, 00:31:53.389 "zoned": false, 00:31:53.389 "supported_io_types": { 00:31:53.389 "read": true, 00:31:53.389 "write": true, 00:31:53.389 "unmap": true, 00:31:53.389 "flush": false, 00:31:53.389 "reset": true, 00:31:53.389 "nvme_admin": false, 00:31:53.389 "nvme_io": false, 00:31:53.389 "nvme_io_md": false, 00:31:53.389 "write_zeroes": true, 00:31:53.389 "zcopy": false, 00:31:53.389 "get_zone_info": false, 00:31:53.389 "zone_management": false, 00:31:53.389 "zone_append": false, 00:31:53.389 "compare": false, 00:31:53.389 "compare_and_write": false, 00:31:53.389 "abort": false, 00:31:53.389 "seek_hole": true, 00:31:53.389 "seek_data": true, 00:31:53.389 "copy": false, 00:31:53.389 "nvme_iov_md": false 00:31:53.389 }, 00:31:53.389 "driver_specific": { 00:31:53.389 "lvol": { 00:31:53.389 "lvol_store_uuid": "21310622-fff6-4544-87c2-a0b500c89d5e", 00:31:53.389 "base_bdev": "nvme0n1", 00:31:53.389 "thin_provision": true, 00:31:53.389 "num_allocated_clusters": 0, 00:31:53.389 "snapshot": false, 00:31:53.389 "clone": false, 00:31:53.389 "esnap_clone": false 00:31:53.389 } 00:31:53.389 } 00:31:53.389 } 00:31:53.389 ]' 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d 
--l2p_dram_limit 10' 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:31:53.389 13:51:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:31:53.390 13:51:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:53.390 13:51:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b8b1c290-c5de-4a4c-a26b-1aa7eb084a5d --l2p_dram_limit 10 -c nvc0n1p0 00:31:53.649 [2024-11-20 13:51:45.632748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.632814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:53.649 [2024-11-20 13:51:45.632840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:53.649 [2024-11-20 13:51:45.632855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.632955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.632977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:53.649 [2024-11-20 13:51:45.632993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:53.649 [2024-11-20 13:51:45.633009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.633074] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:53.649 [2024-11-20 13:51:45.634081] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:53.649 [2024-11-20 13:51:45.634130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.634146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:53.649 [2024-11-20 13:51:45.634163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:31:53.649 [2024-11-20 13:51:45.634175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.634319] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1f1fd82f-8fcb-4345-b093-a1e5a769d63e 00:31:53.649 [2024-11-20 13:51:45.635426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.635470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:53.649 [2024-11-20 13:51:45.635491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:53.649 [2024-11-20 13:51:45.635515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.640134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.640198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:53.649 [2024-11-20 13:51:45.640216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.560 ms 00:31:53.649 [2024-11-20 13:51:45.640231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.640356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.640380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:53.649 [2024-11-20 13:51:45.640394] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:31:53.649 [2024-11-20 13:51:45.640415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.640497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.640521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:53.649 [2024-11-20 13:51:45.640535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:53.649 [2024-11-20 13:51:45.640554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.640589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:53.649 [2024-11-20 13:51:45.645150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.645193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:53.649 [2024-11-20 13:51:45.645215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.566 ms 00:31:53.649 [2024-11-20 13:51:45.645239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.645288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.649 [2024-11-20 13:51:45.645306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:53.649 [2024-11-20 13:51:45.645321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:53.649 [2024-11-20 13:51:45.645335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.649 [2024-11-20 13:51:45.645399] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:53.649 [2024-11-20 13:51:45.645561] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:53.649 [2024-11-20 13:51:45.645585] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:53.649 [2024-11-20 13:51:45.645602] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:53.649 [2024-11-20 13:51:45.645621] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:53.650 [2024-11-20 13:51:45.645636] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:53.650 [2024-11-20 13:51:45.645653] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:53.650 [2024-11-20 13:51:45.645668] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:53.650 [2024-11-20 13:51:45.645686] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:53.650 [2024-11-20 13:51:45.645699] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:53.650 [2024-11-20 13:51:45.645714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.650 [2024-11-20 13:51:45.645726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:53.650 [2024-11-20 13:51:45.645742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:31:53.650 [2024-11-20 13:51:45.645766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.650 [2024-11-20 13:51:45.645886] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.650 [2024-11-20 13:51:45.645905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:53.650 [2024-11-20 13:51:45.645922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:53.650 [2024-11-20 13:51:45.645935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.650 [2024-11-20 13:51:45.646061] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:53.650 [2024-11-20 13:51:45.646092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:53.650 [2024-11-20 13:51:45.646111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:53.650 [2024-11-20 13:51:45.646152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:53.650 [2024-11-20 13:51:45.646194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:53.650 [2024-11-20 13:51:45.646224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:53.650 [2024-11-20 13:51:45.646237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:53.650 [2024-11-20 13:51:45.646251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:53.650 [2024-11-20 13:51:45.646263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:53.650 [2024-11-20 13:51:45.646278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:53.650 [2024-11-20 13:51:45.646290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:53.650 [2024-11-20 13:51:45.646319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:53.650 [2024-11-20 13:51:45.646362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:53.650 [2024-11-20 13:51:45.646402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:53.650 [2024-11-20 13:51:45.646442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646468] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:53.650 [2024-11-20 13:51:45.646480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:53.650 [2024-11-20 13:51:45.646522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:53.650 [2024-11-20 13:51:45.646549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:53.650 [2024-11-20 13:51:45.646561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:53.650 [2024-11-20 13:51:45.646575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:53.650 [2024-11-20 13:51:45.646588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:53.650 [2024-11-20 13:51:45.646602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:53.650 [2024-11-20 13:51:45.646614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:53.650 [2024-11-20 13:51:45.646641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:53.650 [2024-11-20 13:51:45.646655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646668] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:53.650 [2024-11-20 13:51:45.646683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:53.650 [2024-11-20 13:51:45.646695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.650 [2024-11-20 13:51:45.646726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:53.650 [2024-11-20 13:51:45.646755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:53.650 [2024-11-20 13:51:45.646768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:53.650 [2024-11-20 13:51:45.646783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:53.650 [2024-11-20 13:51:45.646795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:53.650 [2024-11-20 13:51:45.646810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:53.650 [2024-11-20 13:51:45.646827] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:53.650 [2024-11-20 13:51:45.646846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.650 [2024-11-20 13:51:45.646863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:53.650 [2024-11-20 13:51:45.646900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:53.650 [2024-11-20 13:51:45.646914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:53.650 [2024-11-20 13:51:45.646929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:53.650 [2024-11-20 13:51:45.646942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:53.650 [2024-11-20 13:51:45.646956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:53.650 [2024-11-20 13:51:45.646969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:53.650 [2024-11-20 13:51:45.646984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:53.650 [2024-11-20 13:51:45.646996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:53.650 [2024-11-20 13:51:45.647013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:53.650 [2024-11-20 13:51:45.647026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:53.650 [2024-11-20 13:51:45.647040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:53.650 [2024-11-20 13:51:45.647053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:53.650 [2024-11-20 13:51:45.647070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:53.650 [2024-11-20 13:51:45.647083] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:53.651 [2024-11-20 13:51:45.647099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.651 [2024-11-20 13:51:45.647113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:53.651 [2024-11-20 13:51:45.647128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:53.651 [2024-11-20 13:51:45.647141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:53.651 [2024-11-20 13:51:45.647155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:53.651 [2024-11-20 13:51:45.647169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.651 [2024-11-20 13:51:45.647184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:53.651 [2024-11-20 13:51:45.647197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:31:53.651 [2024-11-20 13:51:45.647212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.651 [2024-11-20 13:51:45.647268] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:53.651 [2024-11-20 13:51:45.647297] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:56.179 [2024-11-20 13:51:47.609936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.610028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:56.179 [2024-11-20 13:51:47.610069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1962.679 ms 00:31:56.179 [2024-11-20 13:51:47.610099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.643499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.643590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:56.179 [2024-11-20 13:51:47.643625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.045 ms 00:31:56.179 [2024-11-20 13:51:47.643653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.643934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.643986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:56.179 [2024-11-20 13:51:47.644017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:31:56.179 [2024-11-20 13:51:47.644067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.684956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.685036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:56.179 [2024-11-20 13:51:47.685070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.787 ms 00:31:56.179 [2024-11-20 13:51:47.685098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.685192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.685238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:56.179 [2024-11-20 13:51:47.685265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:56.179 [2024-11-20 13:51:47.685294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.685744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.685789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:56.179 [2024-11-20 13:51:47.685818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:31:56.179 [2024-11-20 13:51:47.685844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.686083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.686118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:56.179 [2024-11-20 13:51:47.686137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:31:56.179 [2024-11-20 13:51:47.686155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.703707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.703772] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:56.179 [2024-11-20 13:51:47.703793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.518 ms 00:31:56.179 [2024-11-20 13:51:47.703809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.179 [2024-11-20 13:51:47.717349] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:56.179 [2024-11-20 13:51:47.720035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.179 [2024-11-20 13:51:47.720201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:56.180 [2024-11-20 13:51:47.720238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.051 ms 00:31:56.180 [2024-11-20 13:51:47.720254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:47.799560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:47.799632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:56.180 [2024-11-20 13:51:47.799658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.245 ms 00:31:56.180 [2024-11-20 13:51:47.799673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:47.799931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:47.799966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:56.180 [2024-11-20 13:51:47.799988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:31:56.180 [2024-11-20 13:51:47.800002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:47.831369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:47.831551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:56.180 [2024-11-20 13:51:47.831589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.280 ms 00:31:56.180 [2024-11-20 13:51:47.831605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:47.862387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:47.862571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:56.180 [2024-11-20 13:51:47.862607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.709 ms 00:31:56.180 [2024-11-20 13:51:47.862622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:47.863409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:47.863446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:56.180 [2024-11-20 13:51:47.863467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:31:56.180 [2024-11-20 13:51:47.863483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:47.945424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:47.945667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:56.180 [2024-11-20 13:51:47.945725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.856 ms 00:31:56.180 [2024-11-20 13:51:47.945743] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:47.981706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:47.981772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:56.180 [2024-11-20 13:51:47.981797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.786 ms 00:31:56.180 [2024-11-20 13:51:47.981811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:48.013721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:48.013780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:56.180 [2024-11-20 13:51:48.013803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.844 ms 00:31:56.180 [2024-11-20 13:51:48.013817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:48.045323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:48.045382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:56.180 [2024-11-20 13:51:48.045406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.419 ms 00:31:56.180 [2024-11-20 13:51:48.045421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:48.045487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:48.045507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:56.180 [2024-11-20 13:51:48.045527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:56.180 [2024-11-20 13:51:48.045541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:48.045670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.180 [2024-11-20 13:51:48.045690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:56.180 [2024-11-20 13:51:48.045711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:56.180 [2024-11-20 13:51:48.045724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.180 [2024-11-20 13:51:48.046939] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2413.685 ms, result 0 00:31:56.180 { 00:31:56.180 "name": "ftl0", 00:31:56.180 "uuid": "1f1fd82f-8fcb-4345-b093-a1e5a769d63e" 00:31:56.180 } 00:31:56.180 13:51:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:31:56.180 13:51:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:56.439 13:51:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:31:56.439 13:51:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:31:56.439 13:51:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:31:56.698 /dev/nbd0 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:31:56.698 1+0 records in 00:31:56.698 1+0 records out 00:31:56.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378314 s, 10.8 MB/s 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:31:56.698 13:51:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:31:56.956 [2024-11-20 13:51:48.823356] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:31:56.956 [2024-11-20 13:51:48.823502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81364 ] 00:31:57.214 [2024-11-20 13:51:48.994201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.214 [2024-11-20 13:51:49.096557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.587  [2024-11-20T13:51:51.560Z] Copying: 163/1024 [MB] (163 MBps) [2024-11-20T13:51:52.493Z] Copying: 327/1024 [MB] (164 MBps) [2024-11-20T13:51:53.429Z] Copying: 495/1024 [MB] (167 MBps) [2024-11-20T13:51:54.806Z] Copying: 661/1024 [MB] (166 MBps) [2024-11-20T13:51:55.740Z] Copying: 826/1024 [MB] (165 MBps) [2024-11-20T13:51:55.740Z] Copying: 983/1024 [MB] (156 MBps) [2024-11-20T13:51:56.759Z] Copying: 1024/1024 [MB] (average 163 MBps) 00:32:04.720 00:32:04.720 13:51:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:07.252 13:51:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:32:07.252 [2024-11-20 13:51:59.031758] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
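The trace above follows dirty_shutdown.sh through its setup steps: modprobe nbd, exporting ftl0 as /dev/nbd0 with the nbd_start_disk RPC, the waitfornbd helper polling /proc/partitions (up to 20 tries) and then confirming a single 4 KiB O_DIRECT read (the "1+0 records" lines), then spdk_dd filling a 1 GiB test file (262144 x 4096-byte blocks) from /dev/urandom, checksumming it with md5sum, and streaming it onto the device. A minimal sketch of the same sequence, using coreutils dd in place of spdk_dd; the /tmp paths and the 0.1 s poll interval are assumptions, not the values the test scripts use:

  # Sketch only: reproduces the traced steps with coreutils dd.
  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0

  # waitfornbd: wait until the kernel lists the device in /proc/partitions,
  # then verify one 4 KiB direct read succeeds, as the trace above does.
  for i in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

  dd if=/dev/urandom of=/tmp/testfile bs=4096 count=262144   # 1 GiB payload
  md5sum /tmp/testfile                                       # reference checksum
  dd if=/tmp/testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct
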
00:32:07.252 [2024-11-20 13:51:59.031956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81475 ] 00:32:07.252 [2024-11-20 13:51:59.249077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.514 [2024-11-20 13:51:59.385904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.888  [2024-11-20T13:52:01.863Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-20T13:52:02.836Z] Copying: 30/1024 [MB] (15 MBps) [2024-11-20T13:52:03.770Z] Copying: 45/1024 [MB] (15 MBps) [2024-11-20T13:52:05.145Z] Copying: 59/1024 [MB] (14 MBps) [2024-11-20T13:52:06.079Z] Copying: 76/1024 [MB] (17 MBps) [2024-11-20T13:52:07.013Z] Copying: 91/1024 [MB] (15 MBps) [2024-11-20T13:52:07.948Z] Copying: 106/1024 [MB] (14 MBps) [2024-11-20T13:52:08.883Z] Copying: 123/1024 [MB] (17 MBps) [2024-11-20T13:52:09.859Z] Copying: 141/1024 [MB] (17 MBps) [2024-11-20T13:52:10.799Z] Copying: 158/1024 [MB] (17 MBps) [2024-11-20T13:52:12.174Z] Copying: 174/1024 [MB] (16 MBps) [2024-11-20T13:52:12.742Z] Copying: 191/1024 [MB] (16 MBps) [2024-11-20T13:52:14.117Z] Copying: 208/1024 [MB] (16 MBps) [2024-11-20T13:52:15.052Z] Copying: 225/1024 [MB] (16 MBps) [2024-11-20T13:52:15.989Z] Copying: 241/1024 [MB] (15 MBps) [2024-11-20T13:52:17.032Z] Copying: 257/1024 [MB] (16 MBps) [2024-11-20T13:52:17.964Z] Copying: 273/1024 [MB] (16 MBps) [2024-11-20T13:52:18.895Z] Copying: 289/1024 [MB] (15 MBps) [2024-11-20T13:52:19.827Z] Copying: 305/1024 [MB] (15 MBps) [2024-11-20T13:52:20.761Z] Copying: 323/1024 [MB] (17 MBps) [2024-11-20T13:52:22.133Z] Copying: 341/1024 [MB] (18 MBps) [2024-11-20T13:52:23.068Z] Copying: 359/1024 [MB] (18 MBps) [2024-11-20T13:52:24.003Z] Copying: 376/1024 [MB] (17 MBps) [2024-11-20T13:52:24.937Z] Copying: 393/1024 [MB] (16 MBps) [2024-11-20T13:52:25.873Z] Copying: 409/1024 [MB] (16 MBps) [2024-11-20T13:52:26.807Z] Copying: 426/1024 [MB] (16 MBps) [2024-11-20T13:52:27.741Z] Copying: 443/1024 [MB] (16 MBps) [2024-11-20T13:52:29.117Z] Copying: 459/1024 [MB] (16 MBps) [2024-11-20T13:52:30.054Z] Copying: 474/1024 [MB] (15 MBps) [2024-11-20T13:52:30.992Z] Copying: 491/1024 [MB] (16 MBps) [2024-11-20T13:52:31.929Z] Copying: 507/1024 [MB] (16 MBps) [2024-11-20T13:52:32.931Z] Copying: 522/1024 [MB] (15 MBps) [2024-11-20T13:52:33.867Z] Copying: 539/1024 [MB] (16 MBps) [2024-11-20T13:52:34.804Z] Copying: 556/1024 [MB] (17 MBps) [2024-11-20T13:52:35.740Z] Copying: 573/1024 [MB] (16 MBps) [2024-11-20T13:52:37.115Z] Copying: 589/1024 [MB] (16 MBps) [2024-11-20T13:52:38.052Z] Copying: 606/1024 [MB] (16 MBps) [2024-11-20T13:52:38.986Z] Copying: 622/1024 [MB] (16 MBps) [2024-11-20T13:52:39.922Z] Copying: 638/1024 [MB] (16 MBps) [2024-11-20T13:52:40.882Z] Copying: 654/1024 [MB] (15 MBps) [2024-11-20T13:52:41.817Z] Copying: 670/1024 [MB] (16 MBps) [2024-11-20T13:52:42.754Z] Copying: 686/1024 [MB] (16 MBps) [2024-11-20T13:52:44.129Z] Copying: 703/1024 [MB] (16 MBps) [2024-11-20T13:52:45.064Z] Copying: 719/1024 [MB] (16 MBps) [2024-11-20T13:52:46.000Z] Copying: 734/1024 [MB] (15 MBps) [2024-11-20T13:52:46.942Z] Copying: 751/1024 [MB] (16 MBps) [2024-11-20T13:52:47.878Z] Copying: 768/1024 [MB] (16 MBps) [2024-11-20T13:52:48.814Z] Copying: 785/1024 [MB] (16 MBps) [2024-11-20T13:52:49.752Z] Copying: 802/1024 [MB] (16 MBps) [2024-11-20T13:52:51.129Z] Copying: 818/1024 [MB] (16 MBps) 
[2024-11-20T13:52:52.065Z] Copying: 834/1024 [MB] (16 MBps) [2024-11-20T13:52:53.000Z] Copying: 851/1024 [MB] (16 MBps) [2024-11-20T13:52:53.977Z] Copying: 867/1024 [MB] (16 MBps) [2024-11-20T13:52:54.912Z] Copying: 885/1024 [MB] (17 MBps) [2024-11-20T13:52:55.847Z] Copying: 901/1024 [MB] (15 MBps) [2024-11-20T13:52:56.783Z] Copying: 917/1024 [MB] (15 MBps) [2024-11-20T13:52:58.158Z] Copying: 933/1024 [MB] (15 MBps) [2024-11-20T13:52:59.094Z] Copying: 949/1024 [MB] (16 MBps) [2024-11-20T13:53:00.025Z] Copying: 965/1024 [MB] (16 MBps) [2024-11-20T13:53:00.961Z] Copying: 982/1024 [MB] (16 MBps) [2024-11-20T13:53:01.897Z] Copying: 998/1024 [MB] (16 MBps) [2024-11-20T13:53:02.468Z] Copying: 1015/1024 [MB] (16 MBps) [2024-11-20T13:53:03.404Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:33:11.365 00:33:11.365 13:53:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:33:11.365 13:53:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:33:11.623 13:53:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:12.192 [2024-11-20 13:53:03.931200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:03.931269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:12.192 [2024-11-20 13:53:03.931292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:12.192 [2024-11-20 13:53:03.931310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:03.931351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:12.192 [2024-11-20 13:53:03.934732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:03.934778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:12.192 [2024-11-20 13:53:03.934799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.351 ms 00:33:12.192 [2024-11-20 13:53:03.934813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:03.936253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:03.936297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:12.192 [2024-11-20 13:53:03.936319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:33:12.192 [2024-11-20 13:53:03.936333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:03.951994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:03.952044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:12.192 [2024-11-20 13:53:03.952067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.627 ms 00:33:12.192 [2024-11-20 13:53:03.952081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:03.958804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:03.958844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:12.192 [2024-11-20 13:53:03.958865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.669 ms 00:33:12.192 [2024-11-20 13:53:03.958895] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:03.990387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:03.990446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:12.192 [2024-11-20 13:53:03.990469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.380 ms 00:33:12.192 [2024-11-20 13:53:03.990483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:04.009196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:04.009410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:12.192 [2024-11-20 13:53:04.009448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.650 ms 00:33:12.192 [2024-11-20 13:53:04.009477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:04.009717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:04.009743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:12.192 [2024-11-20 13:53:04.009761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:33:12.192 [2024-11-20 13:53:04.009775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:04.041586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:04.041642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:12.192 [2024-11-20 13:53:04.041677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.776 ms 00:33:12.192 [2024-11-20 13:53:04.041692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:04.073102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:04.073156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:12.192 [2024-11-20 13:53:04.073178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.339 ms 00:33:12.192 [2024-11-20 13:53:04.073193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:04.104114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:04.104311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:12.192 [2024-11-20 13:53:04.104349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.853 ms 00:33:12.192 [2024-11-20 13:53:04.104364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:04.135549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.192 [2024-11-20 13:53:04.135731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:12.192 [2024-11-20 13:53:04.135767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.045 ms 00:33:12.192 [2024-11-20 13:53:04.135783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.192 [2024-11-20 13:53:04.135840] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:12.192 [2024-11-20 13:53:04.135894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.135919] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.135934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.135951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.135965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.135981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.135995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:12.192 [2024-11-20 13:53:04.136267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136311] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 
13:53:04.136712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.136983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:33:12.193 [2024-11-20 13:53:04.137110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:12.193 [2024-11-20 13:53:04.137381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:12.194 [2024-11-20 13:53:04.137397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:12.194 [2024-11-20 13:53:04.137426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:12.194 [2024-11-20 13:53:04.137459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:12.194 [2024-11-20 13:53:04.137473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:12.194 [2024-11-20 13:53:04.137491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:12.194 [2024-11-20 13:53:04.137522] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:12.194 
[2024-11-20 13:53:04.137540] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f1fd82f-8fcb-4345-b093-a1e5a769d63e 00:33:12.194 [2024-11-20 13:53:04.137554] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:12.194 [2024-11-20 13:53:04.137572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:12.194 [2024-11-20 13:53:04.137584] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:12.194 [2024-11-20 13:53:04.137603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:12.194 [2024-11-20 13:53:04.137616] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:12.194 [2024-11-20 13:53:04.137632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:12.194 [2024-11-20 13:53:04.137645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:12.194 [2024-11-20 13:53:04.137659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:12.194 [2024-11-20 13:53:04.137671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:12.194 [2024-11-20 13:53:04.137687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.194 [2024-11-20 13:53:04.137710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:12.194 [2024-11-20 13:53:04.137727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:33:12.194 [2024-11-20 13:53:04.137741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.194 [2024-11-20 13:53:04.154790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.194 [2024-11-20 13:53:04.154988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:12.194 [2024-11-20 13:53:04.155025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.955 ms 00:33:12.194 [2024-11-20 13:53:04.155040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.194 [2024-11-20 13:53:04.155493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.194 [2024-11-20 13:53:04.155521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:12.194 [2024-11-20 13:53:04.155540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:33:12.194 [2024-11-20 13:53:04.155554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.194 [2024-11-20 13:53:04.213479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.194 [2024-11-20 13:53:04.213703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:12.194 [2024-11-20 13:53:04.213741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.194 [2024-11-20 13:53:04.213756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.194 [2024-11-20 13:53:04.213854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.194 [2024-11-20 13:53:04.213891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:12.194 [2024-11-20 13:53:04.213910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.194 [2024-11-20 13:53:04.213923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.194 [2024-11-20 13:53:04.214060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.194 [2024-11-20 
13:53:04.214092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:12.194 [2024-11-20 13:53:04.214121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.194 [2024-11-20 13:53:04.214136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.194 [2024-11-20 13:53:04.214183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.194 [2024-11-20 13:53:04.214198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:12.194 [2024-11-20 13:53:04.214213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.194 [2024-11-20 13:53:04.214226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.453 [2024-11-20 13:53:04.322388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.453 [2024-11-20 13:53:04.322466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:12.453 [2024-11-20 13:53:04.322489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.453 [2024-11-20 13:53:04.322504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.453 [2024-11-20 13:53:04.408065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.453 [2024-11-20 13:53:04.408319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:12.453 [2024-11-20 13:53:04.408358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.453 [2024-11-20 13:53:04.408374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.453 [2024-11-20 13:53:04.408527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.453 [2024-11-20 13:53:04.408548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:12.453 [2024-11-20 13:53:04.408564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.453 [2024-11-20 13:53:04.408581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.453 [2024-11-20 13:53:04.408656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.453 [2024-11-20 13:53:04.408675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:12.453 [2024-11-20 13:53:04.408691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.453 [2024-11-20 13:53:04.408704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.453 [2024-11-20 13:53:04.408838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.453 [2024-11-20 13:53:04.408858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:12.453 [2024-11-20 13:53:04.408905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.453 [2024-11-20 13:53:04.408924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.453 [2024-11-20 13:53:04.408987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.453 [2024-11-20 13:53:04.409012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:12.454 [2024-11-20 13:53:04.409028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.454 [2024-11-20 13:53:04.409040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.454 [2024-11-20 13:53:04.409093] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.454 [2024-11-20 13:53:04.409108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:12.454 [2024-11-20 13:53:04.409124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.454 [2024-11-20 13:53:04.409136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.454 [2024-11-20 13:53:04.409199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.454 [2024-11-20 13:53:04.409217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:12.454 [2024-11-20 13:53:04.409232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.454 [2024-11-20 13:53:04.409244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.454 [2024-11-20 13:53:04.409434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 478.179 ms, result 0 00:33:12.454 true 00:33:12.454 13:53:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81226 00:33:12.454 13:53:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81226 00:33:12.454 13:53:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:33:12.713 [2024-11-20 13:53:04.553128] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:33:12.713 [2024-11-20 13:53:04.553306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82118 ] 00:33:12.713 [2024-11-20 13:53:04.742975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.972 [2024-11-20 13:53:04.850013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.353  [2024-11-20T13:53:07.368Z] Copying: 157/1024 [MB] (157 MBps) [2024-11-20T13:53:08.303Z] Copying: 322/1024 [MB] (164 MBps) [2024-11-20T13:53:09.238Z] Copying: 485/1024 [MB] (163 MBps) [2024-11-20T13:53:10.173Z] Copying: 651/1024 [MB] (165 MBps) [2024-11-20T13:53:11.548Z] Copying: 819/1024 [MB] (168 MBps) [2024-11-20T13:53:11.548Z] Copying: 987/1024 [MB] (167 MBps) [2024-11-20T13:53:12.501Z] Copying: 1024/1024 [MB] (average 164 MBps) 00:33:20.462 00:33:20.462 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81226 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:33:20.462 13:53:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:20.462 [2024-11-20 13:53:12.429203] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
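Here the dirty shutdown itself happens: after ftl0 is unloaded and the nbd device torn down, dirty_shutdown.sh line 83 sends SIGKILL to the spdk_tgt process (pid 81226), removes its trace shm file, generates a second random test file, and line 88 starts a standalone spdk_dd that loads the bdev stack from the ftl.json config saved earlier with save_subsystem_config. The startup trace that follows shows the recovery path ("Performing recovery on blobstore", "SHM: clean 0, shm_clean 0") before the seeked write proceeds. A condensed sketch of those steps; the pid and paths are copied from the trace and stand in for the script's variables:

  # Kill the target without a clean exit, then drop its trace file.
  kill -9 81226
  rm -f /dev/shm/spdk_tgt_trace.pid81226

  # Second payload to write back after recovery.
  "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of=test/ftl/testfile2 \
      --bs=4096 --count=262144

  # Standalone spdk_dd: bring up ftl0 from the saved JSON config and
  # write at a 1 GiB offset, which forces FTL startup/recovery first.
  "$SPDK_BIN_DIR/spdk_dd" --if=test/ftl/testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 \
      --json=test/ftl/config/ftl.json
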
00:33:20.462 [2024-11-20 13:53:12.429353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82194 ] 00:33:20.730 [2024-11-20 13:53:12.625847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.730 [2024-11-20 13:53:12.764282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.296 [2024-11-20 13:53:13.112185] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:21.296 [2024-11-20 13:53:13.112474] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:21.296 [2024-11-20 13:53:13.178987] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:21.296 [2024-11-20 13:53:13.179539] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:21.296 [2024-11-20 13:53:13.179750] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:21.555 [2024-11-20 13:53:13.393288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.393349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:21.555 [2024-11-20 13:53:13.393369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:21.555 [2024-11-20 13:53:13.393382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.393455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.393473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:21.555 [2024-11-20 13:53:13.393486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:21.555 [2024-11-20 13:53:13.393496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.393528] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:21.555 [2024-11-20 13:53:13.394466] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:21.555 [2024-11-20 13:53:13.394652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.394673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:21.555 [2024-11-20 13:53:13.394687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:33:21.555 [2024-11-20 13:53:13.394698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.395831] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:21.555 [2024-11-20 13:53:13.411944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.412001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:21.555 [2024-11-20 13:53:13.412020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.112 ms 00:33:21.555 [2024-11-20 13:53:13.412032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.412110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.412130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:33:21.555 [2024-11-20 13:53:13.412143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:21.555 [2024-11-20 13:53:13.412154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.416665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.416734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:21.555 [2024-11-20 13:53:13.416753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.407 ms 00:33:21.555 [2024-11-20 13:53:13.416765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.416906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.416938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:21.555 [2024-11-20 13:53:13.416952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:33:21.555 [2024-11-20 13:53:13.416963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.417041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.417059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:21.555 [2024-11-20 13:53:13.417072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:21.555 [2024-11-20 13:53:13.417083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.417118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:21.555 [2024-11-20 13:53:13.421509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.421702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:21.555 [2024-11-20 13:53:13.421738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.400 ms 00:33:21.555 [2024-11-20 13:53:13.421757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.421804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.421819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:21.555 [2024-11-20 13:53:13.421832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:21.555 [2024-11-20 13:53:13.421843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.421937] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:21.555 [2024-11-20 13:53:13.421972] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:21.555 [2024-11-20 13:53:13.422017] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:21.555 [2024-11-20 13:53:13.422037] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:21.555 [2024-11-20 13:53:13.422154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:21.555 [2024-11-20 13:53:13.422172] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:21.555 
[2024-11-20 13:53:13.422187] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:21.555 [2024-11-20 13:53:13.422202] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:21.555 [2024-11-20 13:53:13.422220] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:21.555 [2024-11-20 13:53:13.422233] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:21.555 [2024-11-20 13:53:13.422244] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:21.555 [2024-11-20 13:53:13.422254] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:21.555 [2024-11-20 13:53:13.422265] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:21.555 [2024-11-20 13:53:13.422277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.422288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:21.555 [2024-11-20 13:53:13.422300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:33:21.555 [2024-11-20 13:53:13.422311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.422418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.555 [2024-11-20 13:53:13.422441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:21.555 [2024-11-20 13:53:13.422453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:21.555 [2024-11-20 13:53:13.422465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.555 [2024-11-20 13:53:13.422623] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:21.555 [2024-11-20 13:53:13.422648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:21.555 [2024-11-20 13:53:13.422661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:21.555 [2024-11-20 13:53:13.422672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.555 [2024-11-20 13:53:13.422683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:21.555 [2024-11-20 13:53:13.422694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:21.555 [2024-11-20 13:53:13.422704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:21.555 [2024-11-20 13:53:13.422714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:21.555 [2024-11-20 13:53:13.422726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:21.555 [2024-11-20 13:53:13.422736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:21.556 [2024-11-20 13:53:13.422747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:21.556 [2024-11-20 13:53:13.422787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:21.556 [2024-11-20 13:53:13.422797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:21.556 [2024-11-20 13:53:13.422808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:21.556 [2024-11-20 13:53:13.422819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:21.556 [2024-11-20 13:53:13.422829] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.556 [2024-11-20 13:53:13.422839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:21.556 [2024-11-20 13:53:13.422849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:21.556 [2024-11-20 13:53:13.422859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.556 [2024-11-20 13:53:13.422894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:21.556 [2024-11-20 13:53:13.422907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:21.556 [2024-11-20 13:53:13.422920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:21.556 [2024-11-20 13:53:13.422939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:21.556 [2024-11-20 13:53:13.422955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:21.556 [2024-11-20 13:53:13.422966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:21.556 [2024-11-20 13:53:13.422977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:21.556 [2024-11-20 13:53:13.422987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:21.556 [2024-11-20 13:53:13.422997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:21.556 [2024-11-20 13:53:13.423007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:21.556 [2024-11-20 13:53:13.423017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:21.556 [2024-11-20 13:53:13.423026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:21.556 [2024-11-20 13:53:13.423036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:21.556 [2024-11-20 13:53:13.423047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:21.556 [2024-11-20 13:53:13.423056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:21.556 [2024-11-20 13:53:13.423066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:21.556 [2024-11-20 13:53:13.423076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:21.556 [2024-11-20 13:53:13.423086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:21.556 [2024-11-20 13:53:13.423096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:21.556 [2024-11-20 13:53:13.423106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:21.556 [2024-11-20 13:53:13.423116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.556 [2024-11-20 13:53:13.423126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:21.556 [2024-11-20 13:53:13.423136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:21.556 [2024-11-20 13:53:13.423145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.556 [2024-11-20 13:53:13.423157] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:21.556 [2024-11-20 13:53:13.423168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:21.556 [2024-11-20 13:53:13.423179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:21.556 [2024-11-20 13:53:13.423196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.556 [2024-11-20 
13:53:13.423207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:21.556 [2024-11-20 13:53:13.423217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:21.556 [2024-11-20 13:53:13.423227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:21.556 [2024-11-20 13:53:13.423238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:21.556 [2024-11-20 13:53:13.423247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:21.556 [2024-11-20 13:53:13.423258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:21.556 [2024-11-20 13:53:13.423270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:21.556 [2024-11-20 13:53:13.423283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:21.556 [2024-11-20 13:53:13.423295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:21.556 [2024-11-20 13:53:13.423307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:21.556 [2024-11-20 13:53:13.423319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:21.556 [2024-11-20 13:53:13.423330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:21.556 [2024-11-20 13:53:13.423341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:21.556 [2024-11-20 13:53:13.423352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:21.556 [2024-11-20 13:53:13.423363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:21.556 [2024-11-20 13:53:13.423375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:21.556 [2024-11-20 13:53:13.423385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:21.556 [2024-11-20 13:53:13.423396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:21.556 [2024-11-20 13:53:13.423407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:21.556 [2024-11-20 13:53:13.423418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:21.556 [2024-11-20 13:53:13.423430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:21.556 [2024-11-20 13:53:13.423441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:21.556 [2024-11-20 13:53:13.423452] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:33:21.556 [2024-11-20 13:53:13.423464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:21.556 [2024-11-20 13:53:13.423478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:21.556 [2024-11-20 13:53:13.423489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:21.556 [2024-11-20 13:53:13.423500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:21.556 [2024-11-20 13:53:13.423511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:21.556 [2024-11-20 13:53:13.423524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.423537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:21.556 [2024-11-20 13:53:13.423548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:33:21.556 [2024-11-20 13:53:13.423559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.456464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.456764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:21.556 [2024-11-20 13:53:13.456795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.838 ms 00:33:21.556 [2024-11-20 13:53:13.456808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.456959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.456986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:21.556 [2024-11-20 13:53:13.456999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:33:21.556 [2024-11-20 13:53:13.457010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.518463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.518529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:21.556 [2024-11-20 13:53:13.518555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.353 ms 00:33:21.556 [2024-11-20 13:53:13.518567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.518647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.518665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:21.556 [2024-11-20 13:53:13.518679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:21.556 [2024-11-20 13:53:13.518691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.519164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.519200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:21.556 [2024-11-20 13:53:13.519214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:33:21.556 [2024-11-20 13:53:13.519226] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.519395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.519416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:21.556 [2024-11-20 13:53:13.519428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:33:21.556 [2024-11-20 13:53:13.519439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.536344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.556 [2024-11-20 13:53:13.536404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:21.556 [2024-11-20 13:53:13.536423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.875 ms 00:33:21.556 [2024-11-20 13:53:13.536435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.556 [2024-11-20 13:53:13.552973] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:21.556 [2024-11-20 13:53:13.553182] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:21.557 [2024-11-20 13:53:13.553208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.557 [2024-11-20 13:53:13.553221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:21.557 [2024-11-20 13:53:13.553237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.593 ms 00:33:21.557 [2024-11-20 13:53:13.553248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.557 [2024-11-20 13:53:13.583449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.557 [2024-11-20 13:53:13.583532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:21.557 [2024-11-20 13:53:13.583577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.139 ms 00:33:21.557 [2024-11-20 13:53:13.583590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.600096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.600171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:21.815 [2024-11-20 13:53:13.600192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.398 ms 00:33:21.815 [2024-11-20 13:53:13.600203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.616171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.616247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:21.815 [2024-11-20 13:53:13.616267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.890 ms 00:33:21.815 [2024-11-20 13:53:13.616279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.617198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.617236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:21.815 [2024-11-20 13:53:13.617252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:33:21.815 [2024-11-20 13:53:13.617264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
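
A quick consistency check on the layout dump above. Assuming the usual 4 KiB FTL logical block (this log never prints the block size, so treat that as my assumption), the different ways the dump describes the same regions agree with one another:

MiB = 1024 * 1024
FTL_BLOCK = 4096                  # assumed 4 KiB logical block size

# "L2P entries: 20971520" at "L2P address size: 4" bytes per entry ...
assert 20971520 * 4 == 80 * MiB   # ... is the "Region l2p ... blocks: 80.00 MiB"

# The SB metadata layout lists the same region as type 0x2 with blk_sz 0x5000.
assert 0x5000 * FTL_BLOCK == 80 * MiB

# Base device: region type 0x9 with blk_sz 0x1900000 is the user data area,
# i.e. the "Region data_btm ... blocks: 102400.00 MiB" (100 GiB).
assert 0x1900000 * FTL_BLOCK == 102400 * MiB

Read that way, the L2P addresses 20971520 blocks (80 GiB at 4 KiB) of user-visible space against a 100 GiB data region, which looks like roughly 20% over-provisioning.
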
00:33:21.815 [2024-11-20 13:53:13.691529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.691808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:21.815 [2024-11-20 13:53:13.691842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.229 ms 00:33:21.815 [2024-11-20 13:53:13.691857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.705054] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:21.815 [2024-11-20 13:53:13.707892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.707932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:21.815 [2024-11-20 13:53:13.707952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.926 ms 00:33:21.815 [2024-11-20 13:53:13.707965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.708111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.708133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:21.815 [2024-11-20 13:53:13.708147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:21.815 [2024-11-20 13:53:13.708159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.708258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.708279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:21.815 [2024-11-20 13:53:13.708292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:21.815 [2024-11-20 13:53:13.708303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.708337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.708359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:21.815 [2024-11-20 13:53:13.708371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:21.815 [2024-11-20 13:53:13.708382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.708425] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:21.815 [2024-11-20 13:53:13.708443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.708454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:21.815 [2024-11-20 13:53:13.708465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:21.815 [2024-11-20 13:53:13.708476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.741159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 13:53:13.741237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:21.815 [2024-11-20 13:53:13.741259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.647 ms 00:33:21.815 [2024-11-20 13:53:13.741271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.741414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.815 [2024-11-20 
13:53:13.741435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:21.815 [2024-11-20 13:53:13.741449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:21.815 [2024-11-20 13:53:13.741460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.815 [2024-11-20 13:53:13.742932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 349.070 ms, result 0 00:33:22.749  [2024-11-20T13:53:16.162Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-20T13:53:17.096Z] Copying: 61/1024 [MB] (31 MBps) [2024-11-20T13:53:18.033Z] Copying: 92/1024 [MB] (30 MBps) [2024-11-20T13:53:18.974Z] Copying: 123/1024 [MB] (31 MBps) [2024-11-20T13:53:19.907Z] Copying: 153/1024 [MB] (29 MBps) [2024-11-20T13:53:20.842Z] Copying: 182/1024 [MB] (29 MBps) [2024-11-20T13:53:21.775Z] Copying: 210/1024 [MB] (28 MBps) [2024-11-20T13:53:23.150Z] Copying: 238/1024 [MB] (27 MBps) [2024-11-20T13:53:24.084Z] Copying: 266/1024 [MB] (28 MBps) [2024-11-20T13:53:25.053Z] Copying: 294/1024 [MB] (27 MBps) [2024-11-20T13:53:26.005Z] Copying: 324/1024 [MB] (29 MBps) [2024-11-20T13:53:26.943Z] Copying: 352/1024 [MB] (28 MBps) [2024-11-20T13:53:27.877Z] Copying: 380/1024 [MB] (27 MBps) [2024-11-20T13:53:28.813Z] Copying: 407/1024 [MB] (27 MBps) [2024-11-20T13:53:30.193Z] Copying: 436/1024 [MB] (29 MBps) [2024-11-20T13:53:30.760Z] Copying: 465/1024 [MB] (29 MBps) [2024-11-20T13:53:32.137Z] Copying: 493/1024 [MB] (27 MBps) [2024-11-20T13:53:33.073Z] Copying: 521/1024 [MB] (28 MBps) [2024-11-20T13:53:34.010Z] Copying: 549/1024 [MB] (28 MBps) [2024-11-20T13:53:34.946Z] Copying: 577/1024 [MB] (27 MBps) [2024-11-20T13:53:35.881Z] Copying: 603/1024 [MB] (26 MBps) [2024-11-20T13:53:36.817Z] Copying: 630/1024 [MB] (27 MBps) [2024-11-20T13:53:38.195Z] Copying: 658/1024 [MB] (27 MBps) [2024-11-20T13:53:38.789Z] Copying: 686/1024 [MB] (27 MBps) [2024-11-20T13:53:40.168Z] Copying: 713/1024 [MB] (27 MBps) [2024-11-20T13:53:41.103Z] Copying: 740/1024 [MB] (27 MBps) [2024-11-20T13:53:42.039Z] Copying: 769/1024 [MB] (28 MBps) [2024-11-20T13:53:42.975Z] Copying: 797/1024 [MB] (27 MBps) [2024-11-20T13:53:43.910Z] Copying: 825/1024 [MB] (28 MBps) [2024-11-20T13:53:44.847Z] Copying: 853/1024 [MB] (27 MBps) [2024-11-20T13:53:45.813Z] Copying: 881/1024 [MB] (28 MBps) [2024-11-20T13:53:47.190Z] Copying: 909/1024 [MB] (28 MBps) [2024-11-20T13:53:47.757Z] Copying: 937/1024 [MB] (27 MBps) [2024-11-20T13:53:49.133Z] Copying: 963/1024 [MB] (26 MBps) [2024-11-20T13:53:50.068Z] Copying: 990/1024 [MB] (27 MBps) [2024-11-20T13:53:51.004Z] Copying: 1017/1024 [MB] (27 MBps) [2024-11-20T13:53:51.263Z] Copying: 1048228/1048576 [kB] (5844 kBps) [2024-11-20T13:53:51.263Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-20 13:53:51.226775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.224 [2024-11-20 13:53:51.226851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:59.224 [2024-11-20 13:53:51.226885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:59.224 [2024-11-20 13:53:51.226900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.224 [2024-11-20 13:53:51.230609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:59.224 [2024-11-20 13:53:51.237583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.224 [2024-11-20 13:53:51.237791] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:59.224 [2024-11-20 13:53:51.237821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.889 ms 00:33:59.224 [2024-11-20 13:53:51.237834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.224 [2024-11-20 13:53:51.250319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.224 [2024-11-20 13:53:51.250362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:59.224 [2024-11-20 13:53:51.250382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.235 ms 00:33:59.224 [2024-11-20 13:53:51.250392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.483 [2024-11-20 13:53:51.271644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.483 [2024-11-20 13:53:51.271825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:59.483 [2024-11-20 13:53:51.271855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.232 ms 00:33:59.483 [2024-11-20 13:53:51.271888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.483 [2024-11-20 13:53:51.278586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.483 [2024-11-20 13:53:51.278638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:59.483 [2024-11-20 13:53:51.278653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms 00:33:59.483 [2024-11-20 13:53:51.278663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.483 [2024-11-20 13:53:51.311001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.483 [2024-11-20 13:53:51.311062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:59.483 [2024-11-20 13:53:51.311081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.222 ms 00:33:59.483 [2024-11-20 13:53:51.311093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.484 [2024-11-20 13:53:51.329552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.484 [2024-11-20 13:53:51.329598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:59.484 [2024-11-20 13:53:51.329633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.395 ms 00:33:59.484 [2024-11-20 13:53:51.329645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.484 [2024-11-20 13:53:51.442110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.484 [2024-11-20 13:53:51.442262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:59.484 [2024-11-20 13:53:51.442314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.393 ms 00:33:59.484 [2024-11-20 13:53:51.442326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.484 [2024-11-20 13:53:51.474877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.484 [2024-11-20 13:53:51.474939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:59.484 [2024-11-20 13:53:51.474959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.526 ms 00:33:59.484 [2024-11-20 13:53:51.474971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.484 [2024-11-20 13:53:51.506937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:33:59.484 [2024-11-20 13:53:51.507006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:59.484 [2024-11-20 13:53:51.507025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.915 ms 00:33:59.484 [2024-11-20 13:53:51.507036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.745 [2024-11-20 13:53:51.538653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.745 [2024-11-20 13:53:51.538710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:59.745 [2024-11-20 13:53:51.538731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.564 ms 00:33:59.745 [2024-11-20 13:53:51.538742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.745 [2024-11-20 13:53:51.569880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.745 [2024-11-20 13:53:51.570157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:59.745 [2024-11-20 13:53:51.570190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.031 ms 00:33:59.745 [2024-11-20 13:53:51.570203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.745 [2024-11-20 13:53:51.570257] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:59.745 [2024-11-20 13:53:51.570282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:33:59.745 [2024-11-20 13:53:51.570296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 
13:53:51.570460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:59.745 [2024-11-20 13:53:51.570750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:33:59.745 [2024-11-20 13:53:51.570772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.570993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:59.746 [2024-11-20 13:53:51.571503] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:59.746 [2024-11-20 13:53:51.571515] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f1fd82f-8fcb-4345-b093-a1e5a769d63e 00:33:59.746 [2024-11-20 13:53:51.571527] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:33:59.746 [2024-11-20 13:53:51.571546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:33:59.746 [2024-11-20 13:53:51.571570] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:33:59.746 [2024-11-20 13:53:51.571582] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:33:59.746 [2024-11-20 13:53:51.571608] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:59.746 [2024-11-20 13:53:51.571619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:59.746 [2024-11-20 13:53:51.571630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:59.746 [2024-11-20 13:53:51.571640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:59.746 [2024-11-20 13:53:51.571649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:59.746 [2024-11-20 13:53:51.571660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.746 [2024-11-20 13:53:51.571686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:59.746 [2024-11-20 13:53:51.571715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:33:59.746 [2024-11-20 13:53:51.571726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.746 [2024-11-20 13:53:51.588632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.746 [2024-11-20 13:53:51.588709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:59.746 [2024-11-20 13:53:51.588737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.857 ms 00:33:59.746 
[2024-11-20 13:53:51.588749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.746 [2024-11-20 13:53:51.589212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.746 [2024-11-20 13:53:51.589407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:59.746 [2024-11-20 13:53:51.589434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:33:59.746 [2024-11-20 13:53:51.589455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.746 [2024-11-20 13:53:51.633061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:59.746 [2024-11-20 13:53:51.633146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:59.746 [2024-11-20 13:53:51.633181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:59.746 [2024-11-20 13:53:51.633193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.746 [2024-11-20 13:53:51.633275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:59.746 [2024-11-20 13:53:51.633292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:59.746 [2024-11-20 13:53:51.633305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:59.746 [2024-11-20 13:53:51.633324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.746 [2024-11-20 13:53:51.633425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:59.746 [2024-11-20 13:53:51.633446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:59.746 [2024-11-20 13:53:51.633458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:59.746 [2024-11-20 13:53:51.633470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.746 [2024-11-20 13:53:51.633493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:59.746 [2024-11-20 13:53:51.633508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:59.746 [2024-11-20 13:53:51.633519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:59.747 [2024-11-20 13:53:51.633531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.747 [2024-11-20 13:53:51.738747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:59.747 [2024-11-20 13:53:51.738854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:59.747 [2024-11-20 13:53:51.738892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:59.747 [2024-11-20 13:53:51.738906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.006 [2024-11-20 13:53:51.823122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:00.006 [2024-11-20 13:53:51.823452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:00.006 [2024-11-20 13:53:51.823484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:00.006 [2024-11-20 13:53:51.823496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.006 [2024-11-20 13:53:51.823618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:00.006 [2024-11-20 13:53:51.823636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:00.006 [2024-11-20 13:53:51.823649] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:00.006 [2024-11-20 13:53:51.823659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.006 [2024-11-20 13:53:51.823726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:00.006 [2024-11-20 13:53:51.823743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:00.006 [2024-11-20 13:53:51.823755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:00.006 [2024-11-20 13:53:51.823766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.006 [2024-11-20 13:53:51.823930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:00.006 [2024-11-20 13:53:51.823953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:00.006 [2024-11-20 13:53:51.823966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:00.006 [2024-11-20 13:53:51.823977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.006 [2024-11-20 13:53:51.824031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:00.006 [2024-11-20 13:53:51.824051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:00.006 [2024-11-20 13:53:51.824080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:00.006 [2024-11-20 13:53:51.824102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.006 [2024-11-20 13:53:51.824152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:00.006 [2024-11-20 13:53:51.824176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:00.007 [2024-11-20 13:53:51.824188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:00.007 [2024-11-20 13:53:51.824198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.007 [2024-11-20 13:53:51.824266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:00.007 [2024-11-20 13:53:51.824282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:00.007 [2024-11-20 13:53:51.824324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:00.007 [2024-11-20 13:53:51.824335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.007 [2024-11-20 13:53:51.824486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 599.983 ms, result 0 00:34:01.414 00:34:01.414 00:34:01.414 13:53:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:34:03.941 13:53:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:03.941 [2024-11-20 13:53:55.613859] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
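
Two figures from this dirty-shutdown pass can be re-derived from the log itself. The spdk_dd --count appears to be in blocks here (262144 at the 4 KiB block size assumed above lines up exactly with the 1024 MB the copy loop reported), and the WAF in the statistics dump is simply total writes over user writes. A small check; the constant names are mine:

MiB = 1024 * 1024
FTL_BLOCK = 4096                             # assumed 4 KiB blocks, as above

# --count=262144 blocks * 4 KiB = 1 GiB, matching "Copying: 1024/1024 [MB]".
assert 262144 * FTL_BLOCK == 1024 * MiB

# From ftl_dev_dump_stats above: "total writes: 130496", "user writes: 129536"
# (the user-write count also equals the 129536 valid blocks shown for Band 1).
total_writes, user_writes = 130496, 129536
assert f"{total_writes / user_writes:.4f}" == "1.0074"   # matches "WAF: 1.0074"
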
00:34:03.941 [2024-11-20 13:53:55.614050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82615 ] 00:34:03.941 [2024-11-20 13:53:55.798917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.941 [2024-11-20 13:53:55.929472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.510 [2024-11-20 13:53:56.273999] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:04.510 [2024-11-20 13:53:56.274077] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:04.510 [2024-11-20 13:53:56.437256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.510 [2024-11-20 13:53:56.437324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:04.510 [2024-11-20 13:53:56.437352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:04.510 [2024-11-20 13:53:56.437365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.510 [2024-11-20 13:53:56.437436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.510 [2024-11-20 13:53:56.437456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:04.510 [2024-11-20 13:53:56.437474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:34:04.510 [2024-11-20 13:53:56.437487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.510 [2024-11-20 13:53:56.437519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:04.510 [2024-11-20 13:53:56.438500] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:04.510 [2024-11-20 13:53:56.438546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.510 [2024-11-20 13:53:56.438562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:04.510 [2024-11-20 13:53:56.438575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:34:04.510 [2024-11-20 13:53:56.438588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.510 [2024-11-20 13:53:56.439804] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:04.510 [2024-11-20 13:53:56.456756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.510 [2024-11-20 13:53:56.456809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:04.510 [2024-11-20 13:53:56.456833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.952 ms 00:34:04.510 [2024-11-20 13:53:56.456846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.510 [2024-11-20 13:53:56.456958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.510 [2024-11-20 13:53:56.456982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:04.510 [2024-11-20 13:53:56.456996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:04.510 [2024-11-20 13:53:56.457008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.510 [2024-11-20 13:53:56.461767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:04.510 [2024-11-20 13:53:56.461823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:04.511 [2024-11-20 13:53:56.461840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.643 ms 00:34:04.511 [2024-11-20 13:53:56.461861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.511 [2024-11-20 13:53:56.462006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.511 [2024-11-20 13:53:56.462028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:04.511 [2024-11-20 13:53:56.462051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:34:04.511 [2024-11-20 13:53:56.462073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.511 [2024-11-20 13:53:56.462145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.511 [2024-11-20 13:53:56.462163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:04.511 [2024-11-20 13:53:56.462177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:04.511 [2024-11-20 13:53:56.462188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.511 [2024-11-20 13:53:56.462247] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:04.511 [2024-11-20 13:53:56.466638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.511 [2024-11-20 13:53:56.466686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:04.511 [2024-11-20 13:53:56.466705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.407 ms 00:34:04.511 [2024-11-20 13:53:56.466723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.511 [2024-11-20 13:53:56.466779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.511 [2024-11-20 13:53:56.466800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:04.511 [2024-11-20 13:53:56.466814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:34:04.511 [2024-11-20 13:53:56.466826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.511 [2024-11-20 13:53:56.466906] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:04.511 [2024-11-20 13:53:56.466942] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:04.511 [2024-11-20 13:53:56.466986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:04.511 [2024-11-20 13:53:56.467013] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:04.511 [2024-11-20 13:53:56.467137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:04.511 [2024-11-20 13:53:56.467156] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:04.511 [2024-11-20 13:53:56.467171] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:04.511 [2024-11-20 13:53:56.467186] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467200] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:04.511 [2024-11-20 13:53:56.467225] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:04.511 [2024-11-20 13:53:56.467236] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:04.511 [2024-11-20 13:53:56.467253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:04.511 [2024-11-20 13:53:56.467266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.511 [2024-11-20 13:53:56.467279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:04.511 [2024-11-20 13:53:56.467292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:34:04.511 [2024-11-20 13:53:56.467304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.511 [2024-11-20 13:53:56.467408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.511 [2024-11-20 13:53:56.467425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:04.511 [2024-11-20 13:53:56.467438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:34:04.511 [2024-11-20 13:53:56.467450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.511 [2024-11-20 13:53:56.467606] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:04.511 [2024-11-20 13:53:56.467636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:04.511 [2024-11-20 13:53:56.467649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:04.511 [2024-11-20 13:53:56.467685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:04.511 [2024-11-20 13:53:56.467718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:04.511 [2024-11-20 13:53:56.467744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:04.511 [2024-11-20 13:53:56.467765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:04.511 [2024-11-20 13:53:56.467779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:04.511 [2024-11-20 13:53:56.467790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:04.511 [2024-11-20 13:53:56.467802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:04.511 [2024-11-20 13:53:56.467827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:04.511 [2024-11-20 13:53:56.467852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467862] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:04.511 [2024-11-20 13:53:56.467906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:04.511 [2024-11-20 13:53:56.467939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:04.511 [2024-11-20 13:53:56.467971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:04.511 [2024-11-20 13:53:56.467982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:04.511 [2024-11-20 13:53:56.467992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:04.511 [2024-11-20 13:53:56.468003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:04.511 [2024-11-20 13:53:56.468014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:04.511 [2024-11-20 13:53:56.468025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:04.511 [2024-11-20 13:53:56.468036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:04.511 [2024-11-20 13:53:56.468047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:04.511 [2024-11-20 13:53:56.468058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:04.511 [2024-11-20 13:53:56.468069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:04.511 [2024-11-20 13:53:56.468080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:04.511 [2024-11-20 13:53:56.468091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:04.511 [2024-11-20 13:53:56.468102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:04.511 [2024-11-20 13:53:56.468112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:04.511 [2024-11-20 13:53:56.468123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:04.511 [2024-11-20 13:53:56.468134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:04.511 [2024-11-20 13:53:56.468145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:04.511 [2024-11-20 13:53:56.468155] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:04.511 [2024-11-20 13:53:56.468167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:04.511 [2024-11-20 13:53:56.468179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:04.511 [2024-11-20 13:53:56.468190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:04.511 [2024-11-20 13:53:56.468203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:04.511 [2024-11-20 13:53:56.468214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:04.511 [2024-11-20 13:53:56.468225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:04.511 
[2024-11-20 13:53:56.468236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:04.511 [2024-11-20 13:53:56.468247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:04.512 [2024-11-20 13:53:56.468258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:04.512 [2024-11-20 13:53:56.468271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:04.512 [2024-11-20 13:53:56.468286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:04.512 [2024-11-20 13:53:56.468299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:04.512 [2024-11-20 13:53:56.468312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:04.512 [2024-11-20 13:53:56.468324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:04.512 [2024-11-20 13:53:56.468336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:04.512 [2024-11-20 13:53:56.468347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:04.512 [2024-11-20 13:53:56.468359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:04.512 [2024-11-20 13:53:56.468371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:04.512 [2024-11-20 13:53:56.468383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:04.512 [2024-11-20 13:53:56.468394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:04.512 [2024-11-20 13:53:56.468407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:04.512 [2024-11-20 13:53:56.468419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:04.512 [2024-11-20 13:53:56.468431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:04.512 [2024-11-20 13:53:56.468443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:04.512 [2024-11-20 13:53:56.468455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:04.512 [2024-11-20 13:53:56.468467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:04.512 [2024-11-20 13:53:56.468487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:04.512 [2024-11-20 13:53:56.468509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:04.512 [2024-11-20 13:53:56.468527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:04.512 [2024-11-20 13:53:56.468540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:04.512 [2024-11-20 13:53:56.468552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:04.512 [2024-11-20 13:53:56.468565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.512 [2024-11-20 13:53:56.468577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:04.512 [2024-11-20 13:53:56.468590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:34:04.512 [2024-11-20 13:53:56.468601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.512 [2024-11-20 13:53:56.502937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.512 [2024-11-20 13:53:56.503157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:04.512 [2024-11-20 13:53:56.503191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.269 ms 00:34:04.512 [2024-11-20 13:53:56.503205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.512 [2024-11-20 13:53:56.503343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.512 [2024-11-20 13:53:56.503362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:04.512 [2024-11-20 13:53:56.503376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:04.512 [2024-11-20 13:53:56.503388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.585761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.585853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:04.772 [2024-11-20 13:53:56.585920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.269 ms 00:34:04.772 [2024-11-20 13:53:56.585948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.586062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.586094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:04.772 [2024-11-20 13:53:56.586132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:04.772 [2024-11-20 13:53:56.586157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.586714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.586982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:04.772 [2024-11-20 13:53:56.587031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:34:04.772 [2024-11-20 13:53:56.587059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.587343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.587397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:04.772 [2024-11-20 13:53:56.587425] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:34:04.772 [2024-11-20 13:53:56.587463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.614606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.615063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:04.772 [2024-11-20 13:53:56.615152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.084 ms 00:34:04.772 [2024-11-20 13:53:56.615182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.638085] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:04.772 [2024-11-20 13:53:56.638140] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:04.772 [2024-11-20 13:53:56.638163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.638176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:04.772 [2024-11-20 13:53:56.638191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.637 ms 00:34:04.772 [2024-11-20 13:53:56.638203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.669728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.669782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:04.772 [2024-11-20 13:53:56.669802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.466 ms 00:34:04.772 [2024-11-20 13:53:56.669814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.686401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.686461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:04.772 [2024-11-20 13:53:56.686480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.450 ms 00:34:04.772 [2024-11-20 13:53:56.686493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.702922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.702970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:04.772 [2024-11-20 13:53:56.702989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.379 ms 00:34:04.772 [2024-11-20 13:53:56.703001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.703851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.703914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:04.772 [2024-11-20 13:53:56.703933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:34:04.772 [2024-11-20 13:53:56.703950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.782069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.782143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:04.772 [2024-11-20 13:53:56.782172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.087 ms 00:34:04.772 [2024-11-20 13:53:56.782185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.795674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:04.772 [2024-11-20 13:53:56.798739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.798799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:04.772 [2024-11-20 13:53:56.798820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.463 ms 00:34:04.772 [2024-11-20 13:53:56.798832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.798978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.799002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:04.772 [2024-11-20 13:53:56.799017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:04.772 [2024-11-20 13:53:56.799033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.800765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.800836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:04.772 [2024-11-20 13:53:56.800869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:34:04.772 [2024-11-20 13:53:56.800906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.800949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.800965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:04.772 [2024-11-20 13:53:56.800978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:04.772 [2024-11-20 13:53:56.800990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.772 [2024-11-20 13:53:56.801040] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:04.772 [2024-11-20 13:53:56.801057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.772 [2024-11-20 13:53:56.801069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:04.773 [2024-11-20 13:53:56.801082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:34:04.773 [2024-11-20 13:53:56.801094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.032 [2024-11-20 13:53:56.834513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.032 [2024-11-20 13:53:56.834589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:05.032 [2024-11-20 13:53:56.834608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.393 ms 00:34:05.032 [2024-11-20 13:53:56.834643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.032 [2024-11-20 13:53:56.834745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.032 [2024-11-20 13:53:56.834776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:05.032 [2024-11-20 13:53:56.834792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:34:05.032 [2024-11-20 13:53:56.834804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:34:05.032 [2024-11-20 13:53:56.835947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.162 ms, result 0 00:34:06.409  [2024-11-20T13:53:59.384Z] Copying: 768/1048576 [kB] (768 kBps) [2024-11-20T13:54:00.318Z] Copying: 3508/1048576 [kB] (2740 kBps) [2024-11-20T13:54:01.253Z] Copying: 19/1024 [MB] (16 MBps) [2024-11-20T13:54:02.191Z] Copying: 49/1024 [MB] (30 MBps) [2024-11-20T13:54:03.125Z] Copying: 80/1024 [MB] (30 MBps) [2024-11-20T13:54:04.501Z] Copying: 110/1024 [MB] (29 MBps) [2024-11-20T13:54:05.436Z] Copying: 140/1024 [MB] (30 MBps) [2024-11-20T13:54:06.373Z] Copying: 167/1024 [MB] (27 MBps) [2024-11-20T13:54:07.311Z] Copying: 195/1024 [MB] (28 MBps) [2024-11-20T13:54:08.247Z] Copying: 224/1024 [MB] (28 MBps) [2024-11-20T13:54:09.183Z] Copying: 253/1024 [MB] (29 MBps) [2024-11-20T13:54:10.155Z] Copying: 283/1024 [MB] (29 MBps) [2024-11-20T13:54:11.092Z] Copying: 313/1024 [MB] (30 MBps) [2024-11-20T13:54:12.470Z] Copying: 344/1024 [MB] (30 MBps) [2024-11-20T13:54:13.406Z] Copying: 375/1024 [MB] (30 MBps) [2024-11-20T13:54:14.416Z] Copying: 406/1024 [MB] (31 MBps) [2024-11-20T13:54:15.353Z] Copying: 436/1024 [MB] (30 MBps) [2024-11-20T13:54:16.291Z] Copying: 467/1024 [MB] (30 MBps) [2024-11-20T13:54:17.227Z] Copying: 497/1024 [MB] (30 MBps) [2024-11-20T13:54:18.163Z] Copying: 528/1024 [MB] (30 MBps) [2024-11-20T13:54:19.101Z] Copying: 558/1024 [MB] (30 MBps) [2024-11-20T13:54:20.488Z] Copying: 588/1024 [MB] (30 MBps) [2024-11-20T13:54:21.424Z] Copying: 618/1024 [MB] (29 MBps) [2024-11-20T13:54:22.392Z] Copying: 647/1024 [MB] (29 MBps) [2024-11-20T13:54:23.328Z] Copying: 677/1024 [MB] (29 MBps) [2024-11-20T13:54:24.263Z] Copying: 706/1024 [MB] (29 MBps) [2024-11-20T13:54:25.229Z] Copying: 736/1024 [MB] (30 MBps) [2024-11-20T13:54:26.164Z] Copying: 766/1024 [MB] (29 MBps) [2024-11-20T13:54:27.097Z] Copying: 796/1024 [MB] (30 MBps) [2024-11-20T13:54:28.473Z] Copying: 826/1024 [MB] (30 MBps) [2024-11-20T13:54:29.409Z] Copying: 855/1024 [MB] (28 MBps) [2024-11-20T13:54:30.344Z] Copying: 884/1024 [MB] (28 MBps) [2024-11-20T13:54:31.281Z] Copying: 913/1024 [MB] (29 MBps) [2024-11-20T13:54:32.215Z] Copying: 943/1024 [MB] (29 MBps) [2024-11-20T13:54:33.150Z] Copying: 973/1024 [MB] (29 MBps) [2024-11-20T13:54:34.084Z] Copying: 1002/1024 [MB] (29 MBps) [2024-11-20T13:54:35.020Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-20 13:54:34.656553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.981 [2024-11-20 13:54:34.656693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:42.981 [2024-11-20 13:54:34.656742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:42.981 [2024-11-20 13:54:34.656767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.981 [2024-11-20 13:54:34.656826] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:42.981 [2024-11-20 13:54:34.661196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.981 [2024-11-20 13:54:34.661233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:42.981 [2024-11-20 13:54:34.661250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:34:42.981 [2024-11-20 13:54:34.661262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.981 [2024-11-20 13:54:34.661529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:42.981 [2024-11-20 13:54:34.661548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:42.981 [2024-11-20 13:54:34.661567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:34:42.981 [2024-11-20 13:54:34.661578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.981 [2024-11-20 13:54:34.673638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.673691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:42.982 [2024-11-20 13:54:34.673715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.037 ms 00:34:42.982 [2024-11-20 13:54:34.673728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.680458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.680490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:42.982 [2024-11-20 13:54:34.680529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.689 ms 00:34:42.982 [2024-11-20 13:54:34.680541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.713148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.713208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:42.982 [2024-11-20 13:54:34.713244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.536 ms 00:34:42.982 [2024-11-20 13:54:34.713256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.731625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.731819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:42.982 [2024-11-20 13:54:34.731850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.319 ms 00:34:42.982 [2024-11-20 13:54:34.731883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.733400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.733438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:42.982 [2024-11-20 13:54:34.733455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:34:42.982 [2024-11-20 13:54:34.733467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.765797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.765841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:42.982 [2024-11-20 13:54:34.765876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.297 ms 00:34:42.982 [2024-11-20 13:54:34.765904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.797269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.797311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:42.982 [2024-11-20 13:54:34.797358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.317 ms 00:34:42.982 [2024-11-20 13:54:34.797370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 
13:54:34.828663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.828739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:42.982 [2024-11-20 13:54:34.828772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.247 ms 00:34:42.982 [2024-11-20 13:54:34.828783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.859618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.982 [2024-11-20 13:54:34.859837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:42.982 [2024-11-20 13:54:34.859883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.707 ms 00:34:42.982 [2024-11-20 13:54:34.859900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.982 [2024-11-20 13:54:34.859959] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:42.982 [2024-11-20 13:54:34.859985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:42.982 [2024-11-20 13:54:34.860000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:34:42.982 [2024-11-20 13:54:34.860013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 
0 state: free 00:34:42.982 [2024-11-20 13:54:34.860208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
43: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:42.982 [2024-11-20 13:54:34.860651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860837] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.860998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861174] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:42.983 [2024-11-20 13:54:34.861277] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:42.983 [2024-11-20 13:54:34.861288] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f1fd82f-8fcb-4345-b093-a1e5a769d63e 00:34:42.983 [2024-11-20 13:54:34.861300] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:34:42.983 [2024-11-20 13:54:34.861311] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:34:42.983 [2024-11-20 13:54:34.861321] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:34:42.983 [2024-11-20 13:54:34.861338] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:34:42.983 [2024-11-20 13:54:34.861349] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:42.983 [2024-11-20 13:54:34.861361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:42.983 [2024-11-20 13:54:34.861372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:42.983 [2024-11-20 13:54:34.861394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:42.983 [2024-11-20 13:54:34.861404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:42.983 [2024-11-20 13:54:34.861416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.983 [2024-11-20 13:54:34.861427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:42.983 [2024-11-20 13:54:34.861439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:34:42.983 [2024-11-20 13:54:34.861450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.983 [2024-11-20 13:54:34.878216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.983 [2024-11-20 13:54:34.878262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:42.983 [2024-11-20 13:54:34.878295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.723 ms 00:34:42.983 [2024-11-20 13:54:34.878306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.983 [2024-11-20 13:54:34.878727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.983 [2024-11-20 13:54:34.878749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:42.983 [2024-11-20 13:54:34.878762] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:34:42.983 [2024-11-20 13:54:34.878781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.983 [2024-11-20 13:54:34.921652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.983 [2024-11-20 13:54:34.921733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:42.983 [2024-11-20 13:54:34.921768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.983 [2024-11-20 13:54:34.921781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.983 [2024-11-20 13:54:34.921848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.983 [2024-11-20 13:54:34.921863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:42.983 [2024-11-20 13:54:34.921876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.983 [2024-11-20 13:54:34.921923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.983 [2024-11-20 13:54:34.922020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.983 [2024-11-20 13:54:34.922041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:42.983 [2024-11-20 13:54:34.922055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.983 [2024-11-20 13:54:34.922066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.983 [2024-11-20 13:54:34.922089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.983 [2024-11-20 13:54:34.922103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:42.983 [2024-11-20 13:54:34.922121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.983 [2024-11-20 13:54:34.922132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.242 [2024-11-20 13:54:35.026254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.242 [2024-11-20 13:54:35.026339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:43.242 [2024-11-20 13:54:35.026376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.242 [2024-11-20 13:54:35.026388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.110528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.243 [2024-11-20 13:54:35.110595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:43.243 [2024-11-20 13:54:35.110630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.243 [2024-11-20 13:54:35.110642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.110768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.243 [2024-11-20 13:54:35.110818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:43.243 [2024-11-20 13:54:35.110832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.243 [2024-11-20 13:54:35.110843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.110921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.243 [2024-11-20 13:54:35.110941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:34:43.243 [2024-11-20 13:54:35.110954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.243 [2024-11-20 13:54:35.110966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.111089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.243 [2024-11-20 13:54:35.111110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:43.243 [2024-11-20 13:54:35.111130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.243 [2024-11-20 13:54:35.111142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.111199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.243 [2024-11-20 13:54:35.111218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:43.243 [2024-11-20 13:54:35.111230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.243 [2024-11-20 13:54:35.111242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.111286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.243 [2024-11-20 13:54:35.111302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:43.243 [2024-11-20 13:54:35.111315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.243 [2024-11-20 13:54:35.111333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.111383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.243 [2024-11-20 13:54:35.111400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:43.243 [2024-11-20 13:54:35.111413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.243 [2024-11-20 13:54:35.111424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.243 [2024-11-20 13:54:35.111569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.998 ms, result 0 00:34:44.179 00:34:44.179 00:34:44.179 13:54:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:46.713 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:46.713 13:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:46.713 [2024-11-20 13:54:38.346980] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
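The run above completes the first half of the dirty-shutdown check: spdk_dd (pid 82615) copied 1024/1024 [MB] through ftl0, the 'FTL shutdown' management process finished with result 0, and md5sum -c reported testfile: OK, confirming the data written before the restart read back intact. The logged numbers are also internally consistent: WAF = total writes / user writes = 135104 / 133120 ≈ 1.0149, and the l2p region in the layout dump is 20971520 entries × 4 bytes per address = 80.00 MiB, both matching the dumped values. The spdk_dd call at dirty_shutdown.sh@95 then pulls the next region of the device for the same comparison; below is a condensed sketch of that pattern, with paths shortened and the 4 KiB FTL block size behind the arithmetic being an assumption rather than anything stated in the log:

    # Verify the previously written chunk, then read the next 1 GiB region
    # (FTL blocks 262144..524287) out of ftl0 for the same comparison.
    md5sum -c test/ftl/testfile.md5      # prints "testfile: OK" on success
    # --count/--skip are in FTL blocks; 262144 * 4 KiB (assumed) = 1024 MiB:
    build/bin/spdk_dd --ib=ftl0 --of=test/ftl/testfile2 \
        --count=262144 --skip=262144 --json=test/ftl/config/ftl.json

The FTL startup trace that follows is the second spdk_dd instance (pid 83030) attaching to the same device.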
00:34:46.713 [2024-11-20 13:54:38.347155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83030 ] 00:34:46.713 [2024-11-20 13:54:38.529937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.713 [2024-11-20 13:54:38.635004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.971 [2024-11-20 13:54:38.953826] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:46.971 [2024-11-20 13:54:38.953933] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:47.231 [2024-11-20 13:54:39.116700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.116780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:47.231 [2024-11-20 13:54:39.116805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:47.231 [2024-11-20 13:54:39.116817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.116937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.116957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:47.231 [2024-11-20 13:54:39.116973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:34:47.231 [2024-11-20 13:54:39.116984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.117017] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:47.231 [2024-11-20 13:54:39.118010] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:47.231 [2024-11-20 13:54:39.118053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.118067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:47.231 [2024-11-20 13:54:39.118080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:34:47.231 [2024-11-20 13:54:39.118091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.119380] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:47.231 [2024-11-20 13:54:39.136416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.136456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:47.231 [2024-11-20 13:54:39.136489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.036 ms 00:34:47.231 [2024-11-20 13:54:39.136499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.136573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.136591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:47.231 [2024-11-20 13:54:39.136602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:34:47.231 [2024-11-20 13:54:39.136612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.141354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:47.231 [2024-11-20 13:54:39.141393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:47.231 [2024-11-20 13:54:39.141423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:34:47.231 [2024-11-20 13:54:39.141439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.141528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.141546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:47.231 [2024-11-20 13:54:39.141557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:47.231 [2024-11-20 13:54:39.141567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.141640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.141657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:47.231 [2024-11-20 13:54:39.141684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:47.231 [2024-11-20 13:54:39.141712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.141748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:47.231 [2024-11-20 13:54:39.145985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.146022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:47.231 [2024-11-20 13:54:39.146068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:34:47.231 [2024-11-20 13:54:39.146083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.231 [2024-11-20 13:54:39.146127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.231 [2024-11-20 13:54:39.146142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:47.231 [2024-11-20 13:54:39.146154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:47.232 [2024-11-20 13:54:39.146165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.232 [2024-11-20 13:54:39.146208] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:47.232 [2024-11-20 13:54:39.146237] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:47.232 [2024-11-20 13:54:39.146293] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:47.232 [2024-11-20 13:54:39.146316] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:47.232 [2024-11-20 13:54:39.146422] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:47.232 [2024-11-20 13:54:39.146436] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:47.232 [2024-11-20 13:54:39.146449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:47.232 [2024-11-20 13:54:39.146463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:47.232 [2024-11-20 13:54:39.146476] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:47.232 [2024-11-20 13:54:39.146487] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:47.232 [2024-11-20 13:54:39.146497] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:47.232 [2024-11-20 13:54:39.146507] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:47.232 [2024-11-20 13:54:39.146522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:47.232 [2024-11-20 13:54:39.146533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.232 [2024-11-20 13:54:39.146544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:47.232 [2024-11-20 13:54:39.146555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:34:47.232 [2024-11-20 13:54:39.146565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.232 [2024-11-20 13:54:39.146652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.232 [2024-11-20 13:54:39.146682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:47.232 [2024-11-20 13:54:39.146694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:47.232 [2024-11-20 13:54:39.146704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.232 [2024-11-20 13:54:39.146880] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:47.232 [2024-11-20 13:54:39.146929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:47.232 [2024-11-20 13:54:39.146942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:47.232 [2024-11-20 13:54:39.146954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.232 [2024-11-20 13:54:39.146965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:47.232 [2024-11-20 13:54:39.146975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:47.232 [2024-11-20 13:54:39.146986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:47.232 [2024-11-20 13:54:39.146998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:47.232 [2024-11-20 13:54:39.147008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:47.232 [2024-11-20 13:54:39.147029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:47.232 [2024-11-20 13:54:39.147039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:47.232 [2024-11-20 13:54:39.147057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:47.232 [2024-11-20 13:54:39.147068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:47.232 [2024-11-20 13:54:39.147079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:47.232 [2024-11-20 13:54:39.147103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:47.232 [2024-11-20 13:54:39.147124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:47.232 [2024-11-20 13:54:39.147148] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:47.232 [2024-11-20 13:54:39.147168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.232 [2024-11-20 13:54:39.147202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:47.232 [2024-11-20 13:54:39.147211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.232 [2024-11-20 13:54:39.147229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:47.232 [2024-11-20 13:54:39.147239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.232 [2024-11-20 13:54:39.147272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:47.232 [2024-11-20 13:54:39.147281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.232 [2024-11-20 13:54:39.147300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:47.232 [2024-11-20 13:54:39.147326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:47.232 [2024-11-20 13:54:39.147360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:47.232 [2024-11-20 13:54:39.147370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:47.232 [2024-11-20 13:54:39.147380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:47.232 [2024-11-20 13:54:39.147389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:47.232 [2024-11-20 13:54:39.147399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:47.232 [2024-11-20 13:54:39.147409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:47.232 [2024-11-20 13:54:39.147428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:47.232 [2024-11-20 13:54:39.147438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147464] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:47.232 [2024-11-20 13:54:39.147475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:47.232 [2024-11-20 13:54:39.147485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:47.232 [2024-11-20 13:54:39.147496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.232 [2024-11-20 13:54:39.147507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:47.232 [2024-11-20 13:54:39.147518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:47.232 [2024-11-20 13:54:39.147528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:47.232 
[2024-11-20 13:54:39.147538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:47.232 [2024-11-20 13:54:39.147548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:47.232 [2024-11-20 13:54:39.147558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:47.232 [2024-11-20 13:54:39.147571] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:47.232 [2024-11-20 13:54:39.147584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:47.232 [2024-11-20 13:54:39.147597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:47.232 [2024-11-20 13:54:39.147608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:47.232 [2024-11-20 13:54:39.147619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:47.232 [2024-11-20 13:54:39.147630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:47.232 [2024-11-20 13:54:39.147641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:47.232 [2024-11-20 13:54:39.147652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:47.232 [2024-11-20 13:54:39.147663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:47.232 [2024-11-20 13:54:39.147674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:47.232 [2024-11-20 13:54:39.147685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:47.232 [2024-11-20 13:54:39.147696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:47.232 [2024-11-20 13:54:39.147707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:47.233 [2024-11-20 13:54:39.147718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:47.233 [2024-11-20 13:54:39.147729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:47.233 [2024-11-20 13:54:39.147741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:47.233 [2024-11-20 13:54:39.147751] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:47.233 [2024-11-20 13:54:39.147775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:47.233 [2024-11-20 13:54:39.147787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:47.233 [2024-11-20 13:54:39.147798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:47.233 [2024-11-20 13:54:39.147809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:47.233 [2024-11-20 13:54:39.147820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:47.233 [2024-11-20 13:54:39.147832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.147844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:47.233 [2024-11-20 13:54:39.147855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:34:47.233 [2024-11-20 13:54:39.147866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.233 [2024-11-20 13:54:39.181520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.181599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:47.233 [2024-11-20 13:54:39.181630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.587 ms 00:34:47.233 [2024-11-20 13:54:39.181691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.233 [2024-11-20 13:54:39.181823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.181851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:47.233 [2024-11-20 13:54:39.181866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:34:47.233 [2024-11-20 13:54:39.181878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.233 [2024-11-20 13:54:39.232444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.232690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:47.233 [2024-11-20 13:54:39.232721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.444 ms 00:34:47.233 [2024-11-20 13:54:39.232735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.233 [2024-11-20 13:54:39.232813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.232829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:47.233 [2024-11-20 13:54:39.232851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:47.233 [2024-11-20 13:54:39.232862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.233 [2024-11-20 13:54:39.233278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.233298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:47.233 [2024-11-20 13:54:39.233311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:34:47.233 [2024-11-20 13:54:39.233322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.233 [2024-11-20 13:54:39.233488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.233507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:47.233 [2024-11-20 13:54:39.233526] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:34:47.233 [2024-11-20 13:54:39.233536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.233 [2024-11-20 13:54:39.250540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.233 [2024-11-20 13:54:39.250589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:47.233 [2024-11-20 13:54:39.250622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.976 ms 00:34:47.233 [2024-11-20 13:54:39.250633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.267077] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:47.492 [2024-11-20 13:54:39.267343] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:47.492 [2024-11-20 13:54:39.267369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.267382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:47.492 [2024-11-20 13:54:39.267395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.522 ms 00:34:47.492 [2024-11-20 13:54:39.267411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.297175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.297242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:47.492 [2024-11-20 13:54:39.297277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.703 ms 00:34:47.492 [2024-11-20 13:54:39.297288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.314032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.314079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:47.492 [2024-11-20 13:54:39.314113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.677 ms 00:34:47.492 [2024-11-20 13:54:39.314124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.330269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.330331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:47.492 [2024-11-20 13:54:39.330348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.096 ms 00:34:47.492 [2024-11-20 13:54:39.330360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.331259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.331355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:47.492 [2024-11-20 13:54:39.331383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:34:47.492 [2024-11-20 13:54:39.331396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.405119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.405219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:47.492 [2024-11-20 13:54:39.405262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.692 ms 00:34:47.492 [2024-11-20 13:54:39.405283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.417981] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:47.492 [2024-11-20 13:54:39.420587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.420618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:47.492 [2024-11-20 13:54:39.420651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.227 ms 00:34:47.492 [2024-11-20 13:54:39.420661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.420796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.420815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:47.492 [2024-11-20 13:54:39.420832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:47.492 [2024-11-20 13:54:39.420843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.421535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.421573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:47.492 [2024-11-20 13:54:39.421587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:34:47.492 [2024-11-20 13:54:39.421598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.421635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.421651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:47.492 [2024-11-20 13:54:39.421662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:47.492 [2024-11-20 13:54:39.421673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.421750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:47.492 [2024-11-20 13:54:39.421766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.421776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:47.492 [2024-11-20 13:54:39.421803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:34:47.492 [2024-11-20 13:54:39.421814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.453946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.453995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:47.492 [2024-11-20 13:54:39.454035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.096 ms 00:34:47.492 [2024-11-20 13:54:39.454047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.492 [2024-11-20 13:54:39.454140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.492 [2024-11-20 13:54:39.454158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:47.492 [2024-11-20 13:54:39.454171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:47.492 [2024-11-20 13:54:39.454182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
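The trace above shows that every FTL management step is emitted as a fixed quadruple from mngt/ftl_mngt.c: an "Action" marker, the step "name:", its "duration:", and a "status:" code. A minimal sketch (assuming the console output is captured one record per line into a file named build.log — both the filename and that layout are assumptions about how the log was saved, not part of the test) that pairs each name with its duration so the slowest startup steps stand out:

    awk '
      /trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }               # remember the step name
      /trace_step.*duration:/ { sub(/.*duration: /, ""); print $0 "\t" step }      # emit "<duration>  <name>"
    ' build.log | sort -rn | head

Run against the startup sequence above, this would surface entries such as "50.444 ms  Initialize NV cache" and "33.587 ms  Initialize metadata" at the top of the listing.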
00:34:47.492 [2024-11-20 13:54:39.455402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.213 ms, result 0 00:34:48.877  [2024-11-20T13:54:41.853Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-20T13:54:42.788Z] Copying: 51/1024 [MB] (27 MBps) [2024-11-20T13:54:43.723Z] Copying: 78/1024 [MB] (26 MBps) [2024-11-20T13:54:45.098Z] Copying: 104/1024 [MB] (26 MBps) [2024-11-20T13:54:46.033Z] Copying: 130/1024 [MB] (25 MBps) [2024-11-20T13:54:46.991Z] Copying: 155/1024 [MB] (24 MBps) [2024-11-20T13:54:47.926Z] Copying: 179/1024 [MB] (24 MBps) [2024-11-20T13:54:48.862Z] Copying: 203/1024 [MB] (24 MBps) [2024-11-20T13:54:49.797Z] Copying: 229/1024 [MB] (25 MBps) [2024-11-20T13:54:50.735Z] Copying: 253/1024 [MB] (24 MBps) [2024-11-20T13:54:52.112Z] Copying: 277/1024 [MB] (23 MBps) [2024-11-20T13:54:52.679Z] Copying: 300/1024 [MB] (23 MBps) [2024-11-20T13:54:54.056Z] Copying: 325/1024 [MB] (24 MBps) [2024-11-20T13:54:54.991Z] Copying: 351/1024 [MB] (26 MBps) [2024-11-20T13:54:55.926Z] Copying: 377/1024 [MB] (26 MBps) [2024-11-20T13:54:56.863Z] Copying: 403/1024 [MB] (25 MBps) [2024-11-20T13:54:57.799Z] Copying: 431/1024 [MB] (27 MBps) [2024-11-20T13:54:58.734Z] Copying: 458/1024 [MB] (27 MBps) [2024-11-20T13:54:59.682Z] Copying: 483/1024 [MB] (24 MBps) [2024-11-20T13:55:01.056Z] Copying: 507/1024 [MB] (23 MBps) [2024-11-20T13:55:01.992Z] Copying: 532/1024 [MB] (25 MBps) [2024-11-20T13:55:02.926Z] Copying: 558/1024 [MB] (25 MBps) [2024-11-20T13:55:03.861Z] Copying: 583/1024 [MB] (25 MBps) [2024-11-20T13:55:04.825Z] Copying: 608/1024 [MB] (25 MBps) [2024-11-20T13:55:05.761Z] Copying: 635/1024 [MB] (26 MBps) [2024-11-20T13:55:06.699Z] Copying: 660/1024 [MB] (24 MBps) [2024-11-20T13:55:08.076Z] Copying: 684/1024 [MB] (24 MBps) [2024-11-20T13:55:09.012Z] Copying: 711/1024 [MB] (26 MBps) [2024-11-20T13:55:09.946Z] Copying: 736/1024 [MB] (25 MBps) [2024-11-20T13:55:10.957Z] Copying: 761/1024 [MB] (25 MBps) [2024-11-20T13:55:11.915Z] Copying: 788/1024 [MB] (26 MBps) [2024-11-20T13:55:12.850Z] Copying: 813/1024 [MB] (24 MBps) [2024-11-20T13:55:13.785Z] Copying: 837/1024 [MB] (24 MBps) [2024-11-20T13:55:14.722Z] Copying: 862/1024 [MB] (25 MBps) [2024-11-20T13:55:16.097Z] Copying: 887/1024 [MB] (24 MBps) [2024-11-20T13:55:17.033Z] Copying: 915/1024 [MB] (27 MBps) [2024-11-20T13:55:17.970Z] Copying: 941/1024 [MB] (26 MBps) [2024-11-20T13:55:18.906Z] Copying: 967/1024 [MB] (25 MBps) [2024-11-20T13:55:19.842Z] Copying: 995/1024 [MB] (27 MBps) [2024-11-20T13:55:19.842Z] Copying: 1020/1024 [MB] (25 MBps) [2024-11-20T13:55:20.101Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 13:55:19.980431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:19.981150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:28.062 [2024-11-20 13:55:19.981190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:28.062 [2024-11-20 13:55:19.981203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:19.981239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:28.062 [2024-11-20 13:55:19.984564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:19.984608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:28.062 [2024-11-20 13:55:19.984637] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 3.300 ms 00:35:28.062 [2024-11-20 13:55:19.984647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:19.984868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:19.984914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:28.062 [2024-11-20 13:55:19.984928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:35:28.062 [2024-11-20 13:55:19.984938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:19.988281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:19.988314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:28.062 [2024-11-20 13:55:19.988343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:35:28.062 [2024-11-20 13:55:19.988360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:19.994257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:19.994290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:28.062 [2024-11-20 13:55:19.994319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.874 ms 00:35:28.062 [2024-11-20 13:55:19.994330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:20.025263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:20.025318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:28.062 [2024-11-20 13:55:20.025352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.867 ms 00:35:28.062 [2024-11-20 13:55:20.025363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:20.043302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:20.043388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:28.062 [2024-11-20 13:55:20.043424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.900 ms 00:35:28.062 [2024-11-20 13:55:20.043437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:20.045483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:20.045530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:28.062 [2024-11-20 13:55:20.045547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.930 ms 00:35:28.062 [2024-11-20 13:55:20.045559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.062 [2024-11-20 13:55:20.080031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.062 [2024-11-20 13:55:20.080085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:28.062 [2024-11-20 13:55:20.080103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.447 ms 00:35:28.062 [2024-11-20 13:55:20.080125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.323 [2024-11-20 13:55:20.113024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.323 [2024-11-20 13:55:20.113118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:28.323 
[2024-11-20 13:55:20.113152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.863 ms 00:35:28.323 [2024-11-20 13:55:20.113163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.323 [2024-11-20 13:55:20.143180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.323 [2024-11-20 13:55:20.143389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:28.323 [2024-11-20 13:55:20.143418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.962 ms 00:35:28.323 [2024-11-20 13:55:20.143432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.323 [2024-11-20 13:55:20.174764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.323 [2024-11-20 13:55:20.174835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:28.323 [2024-11-20 13:55:20.174853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.246 ms 00:35:28.323 [2024-11-20 13:55:20.174865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.323 [2024-11-20 13:55:20.174913] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:28.323 [2024-11-20 13:55:20.174944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:28.323 [2024-11-20 13:55:20.174963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:35:28.323 [2024-11-20 13:55:20.174976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.174988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 
13:55:20.175142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:28.323 [2024-11-20 13:55:20.175223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:35:28.324 [2024-11-20 13:55:20.175432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:28.324 [2024-11-20 13:55:20.175975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.175987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.175999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:28.325 [2024-11-20 13:55:20.176149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:28.325 [2024-11-20 13:55:20.176160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f1fd82f-8fcb-4345-b093-a1e5a769d63e 00:35:28.325 [2024-11-20 13:55:20.176172] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:35:28.325 [2024-11-20 13:55:20.176183] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:28.325 [2024-11-20 13:55:20.176194] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:28.325 [2024-11-20 13:55:20.176205] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:28.325 [2024-11-20 13:55:20.176216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:28.325 [2024-11-20 13:55:20.176227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:28.325 [2024-11-20 13:55:20.176252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:28.325 [2024-11-20 13:55:20.176263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:28.325 [2024-11-20 13:55:20.176273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:28.325 [2024-11-20 13:55:20.176284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.325 [2024-11-20 13:55:20.176296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:28.325 [2024-11-20 13:55:20.176307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:35:28.325 [2024-11-20 13:55:20.176323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.325 [2024-11-20 13:55:20.193737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.325 [2024-11-20 13:55:20.193786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:28.325 [2024-11-20 13:55:20.193803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.366 ms 00:35:28.325 [2024-11-20 13:55:20.193815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.325 [2024-11-20 13:55:20.194291] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.325 [2024-11-20 13:55:20.194322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:28.325 [2024-11-20 13:55:20.194335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:35:28.325 [2024-11-20 13:55:20.194347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.325 [2024-11-20 13:55:20.238967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.325 [2024-11-20 13:55:20.239288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:28.325 [2024-11-20 13:55:20.239327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.325 [2024-11-20 13:55:20.239341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.325 [2024-11-20 13:55:20.239427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.325 [2024-11-20 13:55:20.239451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:28.325 [2024-11-20 13:55:20.239464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.325 [2024-11-20 13:55:20.239475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.325 [2024-11-20 13:55:20.239573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.325 [2024-11-20 13:55:20.239594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:28.325 [2024-11-20 13:55:20.239607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.325 [2024-11-20 13:55:20.239618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.325 [2024-11-20 13:55:20.239656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.325 [2024-11-20 13:55:20.239669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:28.325 [2024-11-20 13:55:20.239702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.325 [2024-11-20 13:55:20.239714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.325 [2024-11-20 13:55:20.342964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.325 [2024-11-20 13:55:20.343031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:28.325 [2024-11-20 13:55:20.343049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.325 [2024-11-20 13:55:20.343061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.423423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.591 [2024-11-20 13:55:20.423683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:28.591 [2024-11-20 13:55:20.423712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.591 [2024-11-20 13:55:20.423724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.423831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.591 [2024-11-20 13:55:20.423849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:28.591 [2024-11-20 13:55:20.423861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.591 [2024-11-20 13:55:20.423873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.423967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.591 [2024-11-20 13:55:20.423986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:28.591 [2024-11-20 13:55:20.423998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.591 [2024-11-20 13:55:20.424017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.424151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.591 [2024-11-20 13:55:20.424172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:28.591 [2024-11-20 13:55:20.424184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.591 [2024-11-20 13:55:20.424195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.424306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.591 [2024-11-20 13:55:20.424323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:28.591 [2024-11-20 13:55:20.424334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.591 [2024-11-20 13:55:20.424344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.424398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.591 [2024-11-20 13:55:20.424414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:28.591 [2024-11-20 13:55:20.424424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.591 [2024-11-20 13:55:20.424434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.424482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.591 [2024-11-20 13:55:20.424498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:28.591 [2024-11-20 13:55:20.424509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.591 [2024-11-20 13:55:20.424524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.591 [2024-11-20 13:55:20.424654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.226 ms, result 0 00:35:29.534 00:35:29.534 00:35:29.534 13:55:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:35:32.071 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 
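The md5sum -c check traced above is the heart of the dirty-shutdown test: a fingerprint taken while the FTL device was live is replayed after the dirty shutdown and recovery, and "testfile2: OK" confirms no data was lost. A condensed sketch of that pattern (file names mirror the trace; the shutdown and recovery step in the middle is elided because it is driven by the surrounding test script):

    md5sum testfile2 > testfile2.md5   # fingerprint taken before the dirty shutdown
    # ... dirty shutdown, restart, and FTL recovery happen here ...
    md5sum -c testfile2.md5            # prints "testfile2: OK" when recovery preserved the data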
00:35:32.072 Process with pid 81226 is not found 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81226 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81226 ']' 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81226 00:35:32.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81226) - No such process 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81226 is not found' 00:35:32.072 13:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:35:32.072 13:55:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:35:32.072 13:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:32.072 Remove shared memory files 00:35:32.072 13:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:35:32.072 13:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:35:32.072 13:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:35:32.072 13:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:32.332 13:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:35:32.332 ************************************ 00:35:32.332 END TEST ftl_dirty_shutdown 00:35:32.332 ************************************ 00:35:32.332 00:35:32.332 real 3m44.123s 00:35:32.332 user 4m17.066s 00:35:32.332 sys 0m37.620s 00:35:32.332 13:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.332 13:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:32.332 13:55:24 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:32.332 13:55:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:32.332 13:55:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.332 13:55:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:32.332 ************************************ 00:35:32.332 START TEST ftl_upgrade_shutdown 00:35:32.332 ************************************ 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:32.332 * Looking for test storage... 
00:35:32.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:32.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.332 --rc genhtml_branch_coverage=1 00:35:32.332 --rc genhtml_function_coverage=1 00:35:32.332 --rc genhtml_legend=1 00:35:32.332 --rc geninfo_all_blocks=1 00:35:32.332 --rc geninfo_unexecuted_blocks=1 00:35:32.332 00:35:32.332 ' 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:32.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.332 --rc genhtml_branch_coverage=1 00:35:32.332 --rc genhtml_function_coverage=1 00:35:32.332 --rc genhtml_legend=1 00:35:32.332 --rc geninfo_all_blocks=1 00:35:32.332 --rc geninfo_unexecuted_blocks=1 00:35:32.332 00:35:32.332 ' 00:35:32.332 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:32.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.332 --rc genhtml_branch_coverage=1 00:35:32.332 --rc genhtml_function_coverage=1 00:35:32.332 --rc genhtml_legend=1 00:35:32.332 --rc geninfo_all_blocks=1 00:35:32.332 --rc geninfo_unexecuted_blocks=1 00:35:32.333 00:35:32.333 ' 00:35:32.333 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:32.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.333 --rc genhtml_branch_coverage=1 00:35:32.333 --rc genhtml_function_coverage=1 00:35:32.333 --rc genhtml_legend=1 00:35:32.333 --rc geninfo_all_blocks=1 00:35:32.333 --rc geninfo_unexecuted_blocks=1 00:35:32.333 00:35:32.333 ' 00:35:32.333 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:32.333 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:35:32.333 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- 
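The lcov probe traced above is scripts/common.sh's cmp_versions machinery: both version strings are split on '.', '-' and ':' into arrays and compared field by field, with the shorter array padded with zeros. A condensed sketch of the 'lt' (less-than) case, assuming numeric fields only (the real helper also validates digits via decimal and supports the other operators):

    lt() { # usage: lt 1.15 2 -> exit 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1 # equal is not less-than
    }

Here lt 1.15 2 succeeds (1 < 2 in the first field), so the pre-2.0 lcov option names (--rc lcov_branch_coverage=1 ...) are selected, as the exports that follow show.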
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:35:32.593 13:55:24 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83545 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83545 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83545 ']' 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.593 13:55:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:32.593 [2024-11-20 13:55:24.501984] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
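tcp_target_setup above reduces to: launch spdk_tgt pinned to core 0, record its pid, and block until the RPC socket accepts requests. A sketch of that launch-and-wait sequence, with waitforlisten approximated by polling rpc_get_methods rather than the real autotest helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited before listening"; exit 1; }
        sleep 0.2
    done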
00:35:32.593 [2024-11-20 13:55:24.502392] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83545 ] 00:35:32.852 [2024-11-20 13:55:24.686422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.852 [2024-11-20 13:55:24.817166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:35:33.786 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:35:33.787 13:55:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:35:34.045 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:34.612 { 00:35:34.612 "name": "basen1", 00:35:34.612 "aliases": [ 00:35:34.612 "d25e0dc7-a69e-465b-8119-601840db224f" 00:35:34.612 ], 00:35:34.612 "product_name": "NVMe disk", 00:35:34.612 "block_size": 4096, 00:35:34.612 "num_blocks": 1310720, 00:35:34.612 "uuid": "d25e0dc7-a69e-465b-8119-601840db224f", 00:35:34.612 "numa_id": -1, 00:35:34.612 "assigned_rate_limits": { 00:35:34.612 "rw_ios_per_sec": 0, 00:35:34.612 "rw_mbytes_per_sec": 0, 00:35:34.612 "r_mbytes_per_sec": 0, 00:35:34.612 "w_mbytes_per_sec": 0 00:35:34.612 }, 00:35:34.612 "claimed": true, 00:35:34.612 "claim_type": "read_many_write_one", 00:35:34.612 "zoned": false, 00:35:34.612 "supported_io_types": { 00:35:34.612 "read": true, 00:35:34.612 "write": true, 00:35:34.612 "unmap": true, 00:35:34.612 "flush": true, 00:35:34.612 "reset": true, 00:35:34.612 "nvme_admin": true, 00:35:34.612 "nvme_io": true, 00:35:34.612 "nvme_io_md": false, 00:35:34.612 "write_zeroes": true, 00:35:34.612 "zcopy": false, 00:35:34.612 "get_zone_info": false, 00:35:34.612 "zone_management": false, 00:35:34.612 "zone_append": false, 00:35:34.612 "compare": true, 00:35:34.612 "compare_and_write": false, 00:35:34.612 "abort": true, 00:35:34.612 "seek_hole": false, 00:35:34.612 "seek_data": false, 00:35:34.612 "copy": true, 00:35:34.612 "nvme_iov_md": false 00:35:34.612 }, 00:35:34.612 "driver_specific": { 00:35:34.612 "nvme": [ 00:35:34.612 { 00:35:34.612 "pci_address": "0000:00:11.0", 00:35:34.612 "trid": { 00:35:34.612 "trtype": "PCIe", 00:35:34.612 "traddr": "0000:00:11.0" 00:35:34.612 }, 00:35:34.612 "ctrlr_data": { 00:35:34.612 "cntlid": 0, 00:35:34.612 "vendor_id": "0x1b36", 00:35:34.612 "model_number": "QEMU NVMe Ctrl", 00:35:34.612 "serial_number": "12341", 00:35:34.612 "firmware_revision": "8.0.0", 00:35:34.612 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:34.612 "oacs": { 00:35:34.612 "security": 0, 00:35:34.612 "format": 1, 00:35:34.612 "firmware": 0, 00:35:34.612 "ns_manage": 1 00:35:34.612 }, 00:35:34.612 "multi_ctrlr": false, 00:35:34.612 "ana_reporting": false 00:35:34.612 }, 00:35:34.612 "vs": { 00:35:34.612 "nvme_version": "1.4" 00:35:34.612 }, 00:35:34.612 "ns_data": { 00:35:34.612 "id": 1, 00:35:34.612 "can_share": false 00:35:34.612 } 00:35:34.612 } 00:35:34.612 ], 00:35:34.612 "mp_policy": "active_passive" 00:35:34.612 } 00:35:34.612 } 00:35:34.612 ]' 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:35:34.612 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:35:34.613 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:35:34.613 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:35:34.613 13:55:26 ftl.ftl_upgrade_shutdown -- 
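get_bdev_size above is block_size x num_blocks converted to MiB: 4096 B x 1310720 blocks / 1048576 = 5120 MiB. The subsequent '[[ 20480 -le 5120 ]]' test fails because the requested 20480 MiB base exceeds the 5120 MiB device, which is why the script proceeds to carve a thin-provisioned lvol instead of a plain split. A re-run of the same math as a sketch:

    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))             # 5120 (MiB)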
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:34.613 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:34.871 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=21310622-fff6-4544-87c2-a0b500c89d5e 00:35:34.871 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:35:34.871 13:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21310622-fff6-4544-87c2-a0b500c89d5e 00:35:35.130 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:35:35.389 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=751ef0f9-fcb1-4743-953c-d049272cfa85 00:35:35.389 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 751ef0f9-fcb1-4743-953c-d049272cfa85 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=24b48ec3-7226-494a-bc5c-c4a9cc3279ff 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 24b48ec3-7226-494a-bc5c-c4a9cc3279ff ]] 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 24b48ec3-7226-494a-bc5c-c4a9cc3279ff 5120 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=24b48ec3-7226-494a-bc5c-c4a9cc3279ff 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 24b48ec3-7226-494a-bc5c-c4a9cc3279ff 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=24b48ec3-7226-494a-bc5c-c4a9cc3279ff 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:35:35.953 13:55:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24b48ec3-7226-494a-bc5c-c4a9cc3279ff 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:36.212 { 00:35:36.212 "name": "24b48ec3-7226-494a-bc5c-c4a9cc3279ff", 00:35:36.212 "aliases": [ 00:35:36.212 "lvs/basen1p0" 00:35:36.212 ], 00:35:36.212 "product_name": "Logical Volume", 00:35:36.212 "block_size": 4096, 00:35:36.212 "num_blocks": 5242880, 00:35:36.212 "uuid": "24b48ec3-7226-494a-bc5c-c4a9cc3279ff", 00:35:36.212 "assigned_rate_limits": { 00:35:36.212 "rw_ios_per_sec": 0, 00:35:36.212 "rw_mbytes_per_sec": 0, 00:35:36.212 "r_mbytes_per_sec": 0, 00:35:36.212 "w_mbytes_per_sec": 0 00:35:36.212 }, 00:35:36.212 "claimed": false, 00:35:36.212 "zoned": false, 00:35:36.212 "supported_io_types": { 00:35:36.212 "read": true, 00:35:36.212 "write": true, 00:35:36.212 "unmap": true, 00:35:36.212 "flush": false, 00:35:36.212 "reset": true, 00:35:36.212 "nvme_admin": false, 00:35:36.212 "nvme_io": false, 00:35:36.212 "nvme_io_md": false, 00:35:36.212 "write_zeroes": 
true, 00:35:36.212 "zcopy": false, 00:35:36.212 "get_zone_info": false, 00:35:36.212 "zone_management": false, 00:35:36.212 "zone_append": false, 00:35:36.212 "compare": false, 00:35:36.212 "compare_and_write": false, 00:35:36.212 "abort": false, 00:35:36.212 "seek_hole": true, 00:35:36.212 "seek_data": true, 00:35:36.212 "copy": false, 00:35:36.212 "nvme_iov_md": false 00:35:36.212 }, 00:35:36.212 "driver_specific": { 00:35:36.212 "lvol": { 00:35:36.212 "lvol_store_uuid": "751ef0f9-fcb1-4743-953c-d049272cfa85", 00:35:36.212 "base_bdev": "basen1", 00:35:36.212 "thin_provision": true, 00:35:36.212 "num_allocated_clusters": 0, 00:35:36.212 "snapshot": false, 00:35:36.212 "clone": false, 00:35:36.212 "esnap_clone": false 00:35:36.212 } 00:35:36.212 } 00:35:36.212 } 00:35:36.212 ]' 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:35:36.212 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:35:36.472 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:35:36.472 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:35:36.472 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:35:37.039 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:35:37.039 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:35:37.039 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 24b48ec3-7226-494a-bc5c-c4a9cc3279ff -c cachen1p0 --l2p_dram_limit 2 00:35:37.039 [2024-11-20 13:55:29.070997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.039 [2024-11-20 13:55:29.071281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:37.040 [2024-11-20 13:55:29.071322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:37.040 [2024-11-20 13:55:29.071337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.040 [2024-11-20 13:55:29.071440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.040 [2024-11-20 13:55:29.071460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:37.040 [2024-11-20 13:55:29.071476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:35:37.040 [2024-11-20 13:55:29.071488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.040 [2024-11-20 13:55:29.071536] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:37.040 [2024-11-20 
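Condensed, the bdev stack assembled across the trace above, with every RPC verbatim from the log; capturing the printed UUIDs into variables is an assumption about usage, mirroring how the script stores lvs= and base_bdev=:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)             # prints the lvstore uuid
    lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")   # thin 20 GiB lvol on the 5 GiB base
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1                    # one 5120 MiB split -> cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2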
13:55:29.072533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:37.040 [2024-11-20 13:55:29.072575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.040 [2024-11-20 13:55:29.072590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:37.040 [2024-11-20 13:55:29.072605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.045 ms 00:35:37.040 [2024-11-20 13:55:29.072617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.040 [2024-11-20 13:55:29.072764] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 72fef36f-a842-4a7e-9084-f2ffcfb0b342 00:35:37.040 [2024-11-20 13:55:29.073932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.040 [2024-11-20 13:55:29.073981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:35:37.040 [2024-11-20 13:55:29.073999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:35:37.040 [2024-11-20 13:55:29.074014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.078714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.299 [2024-11-20 13:55:29.078776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:37.299 [2024-11-20 13:55:29.078809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.639 ms 00:35:37.299 [2024-11-20 13:55:29.078825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.078911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.299 [2024-11-20 13:55:29.078937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:37.299 [2024-11-20 13:55:29.078950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:35:37.299 [2024-11-20 13:55:29.078967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.079051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.299 [2024-11-20 13:55:29.079075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:37.299 [2024-11-20 13:55:29.079088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:35:37.299 [2024-11-20 13:55:29.079109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.079143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:37.299 [2024-11-20 13:55:29.084245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.299 [2024-11-20 13:55:29.084292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:37.299 [2024-11-20 13:55:29.084316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.107 ms 00:35:37.299 [2024-11-20 13:55:29.084329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.084372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.299 [2024-11-20 13:55:29.084388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:37.299 [2024-11-20 13:55:29.084404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:37.299 [2024-11-20 13:55:29.084416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.084501] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:35:37.299 [2024-11-20 13:55:29.084661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:37.299 [2024-11-20 13:55:29.084698] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:37.299 [2024-11-20 13:55:29.084714] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:37.299 [2024-11-20 13:55:29.084731] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:37.299 [2024-11-20 13:55:29.084746] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:35:37.299 [2024-11-20 13:55:29.084773] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:37.299 [2024-11-20 13:55:29.084784] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:37.299 [2024-11-20 13:55:29.084817] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:37.299 [2024-11-20 13:55:29.084828] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:37.299 [2024-11-20 13:55:29.084858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.299 [2024-11-20 13:55:29.084869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:37.299 [2024-11-20 13:55:29.084941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:35:37.299 [2024-11-20 13:55:29.084957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.085077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.299 [2024-11-20 13:55:29.085093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:37.299 [2024-11-20 13:55:29.085107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:35:37.299 [2024-11-20 13:55:29.085130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.299 [2024-11-20 13:55:29.085262] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:37.299 [2024-11-20 13:55:29.085280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:37.299 [2024-11-20 13:55:29.085296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:37.299 [2024-11-20 13:55:29.085308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.299 [2024-11-20 13:55:29.085322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:37.299 [2024-11-20 13:55:29.085335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:37.299 [2024-11-20 13:55:29.085348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:37.299 [2024-11-20 13:55:29.085359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:37.299 [2024-11-20 13:55:29.085373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:37.299 [2024-11-20 13:55:29.085384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.299 [2024-11-20 13:55:29.085397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:37.299 [2024-11-20 13:55:29.085408] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:35:37.299 [2024-11-20 13:55:29.085421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.299 [2024-11-20 13:55:29.085432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:37.299 [2024-11-20 13:55:29.085446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:37.299 [2024-11-20 13:55:29.085457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.299 [2024-11-20 13:55:29.085475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:37.299 [2024-11-20 13:55:29.085486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:37.300 [2024-11-20 13:55:29.085499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.300 [2024-11-20 13:55:29.085510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:37.300 [2024-11-20 13:55:29.085523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:37.300 [2024-11-20 13:55:29.085534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:37.300 [2024-11-20 13:55:29.085547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:37.300 [2024-11-20 13:55:29.085559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:37.300 [2024-11-20 13:55:29.085571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:37.300 [2024-11-20 13:55:29.085583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:37.300 [2024-11-20 13:55:29.085596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:37.300 [2024-11-20 13:55:29.085608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:37.300 [2024-11-20 13:55:29.085621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:37.300 [2024-11-20 13:55:29.085632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:37.300 [2024-11-20 13:55:29.085645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:37.300 [2024-11-20 13:55:29.085656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:37.300 [2024-11-20 13:55:29.085672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:37.300 [2024-11-20 13:55:29.085683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.300 [2024-11-20 13:55:29.085696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:37.300 [2024-11-20 13:55:29.085706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:37.300 [2024-11-20 13:55:29.085720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.300 [2024-11-20 13:55:29.085730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:37.300 [2024-11-20 13:55:29.085743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:37.300 [2024-11-20 13:55:29.085754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.300 [2024-11-20 13:55:29.085767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:37.300 [2024-11-20 13:55:29.085778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:37.300 [2024-11-20 13:55:29.085794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.300 [2024-11-20 13:55:29.085805] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:35:37.300 [2024-11-20 13:55:29.085821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:37.300 [2024-11-20 13:55:29.085833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:37.300 [2024-11-20 13:55:29.085847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:37.300 [2024-11-20 13:55:29.085859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:37.300 [2024-11-20 13:55:29.085875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:37.300 [2024-11-20 13:55:29.085900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:37.300 [2024-11-20 13:55:29.085916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:37.300 [2024-11-20 13:55:29.085929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:37.300 [2024-11-20 13:55:29.085943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:37.300 [2024-11-20 13:55:29.085958] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:37.300 [2024-11-20 13:55:29.085976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.085992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:37.300 [2024-11-20 13:55:29.086007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:37.300 [2024-11-20 13:55:29.086045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:37.300 [2024-11-20 13:55:29.086058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:37.300 [2024-11-20 13:55:29.086070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:37.300 [2024-11-20 13:55:29.086084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:37.300 [2024-11-20 13:55:29.086178] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:37.300 [2024-11-20 13:55:29.086193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:37.300 [2024-11-20 13:55:29.086220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:37.300 [2024-11-20 13:55:29.086232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:37.300 [2024-11-20 13:55:29.086246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:37.300 [2024-11-20 13:55:29.086259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.300 [2024-11-20 13:55:29.086274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:37.300 [2024-11-20 13:55:29.086286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:35:37.300 [2024-11-20 13:55:29.086300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.300 [2024-11-20 13:55:29.086353] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
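A sanity check on the layout numbers just dumped: the 20 GiB base exposes 5242880 4-KiB blocks, of which 3774873 are user-addressable L2P entries; at 4 B per entry the full map needs about 14.4 MiB (carved as the block-aligned 14.50 MiB l2p region above), while --l2p_dram_limit 2 caps the resident portion at 2 MiB, which the later l2p_cache message ('maximum resident size is: 1 (of 2) MiB') reflects:

    echo $(( 3774873 * 4 ))               # 15099492 B of L2P table
    echo $(( 3774873 * 4 / 1024 / 1024 )) # ~14 MiB, held as the 14.50 MiB region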
00:35:37.300 [2024-11-20 13:55:29.086383] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:39.202 [2024-11-20 13:55:31.098138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.098439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:39.202 [2024-11-20 13:55:31.098577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2011.798 ms 00:35:39.202 [2024-11-20 13:55:31.098638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.130937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.131199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:39.202 [2024-11-20 13:55:31.131358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.879 ms 00:35:39.202 [2024-11-20 13:55:31.131418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.131689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.131803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:39.202 [2024-11-20 13:55:31.131960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:35:39.202 [2024-11-20 13:55:31.132122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.174838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.175090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:39.202 [2024-11-20 13:55:31.175232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.468 ms 00:35:39.202 [2024-11-20 13:55:31.175357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.175477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.175539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:39.202 [2024-11-20 13:55:31.175652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:39.202 [2024-11-20 13:55:31.175774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.176229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.176389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:39.202 [2024-11-20 13:55:31.176512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:35:39.202 [2024-11-20 13:55:31.176574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.176738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.176858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:39.202 [2024-11-20 13:55:31.177000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:35:39.202 [2024-11-20 13:55:31.177141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.195440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.195677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:39.202 [2024-11-20 13:55:31.195804] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.217 ms 00:35:39.202 [2024-11-20 13:55:31.195861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.202 [2024-11-20 13:55:31.209430] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:39.202 [2024-11-20 13:55:31.210554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.202 [2024-11-20 13:55:31.210738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:39.202 [2024-11-20 13:55:31.210899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.493 ms 00:35:39.202 [2024-11-20 13:55:31.211034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.246531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.460 [2024-11-20 13:55:31.246818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:35:39.460 [2024-11-20 13:55:31.246970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.392 ms 00:35:39.460 [2024-11-20 13:55:31.247027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.247164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.460 [2024-11-20 13:55:31.247189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:39.460 [2024-11-20 13:55:31.247209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:35:39.460 [2024-11-20 13:55:31.247221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.277484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.460 [2024-11-20 13:55:31.277732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:35:39.460 [2024-11-20 13:55:31.277787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.168 ms 00:35:39.460 [2024-11-20 13:55:31.277802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.308172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.460 [2024-11-20 13:55:31.308221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:35:39.460 [2024-11-20 13:55:31.308243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.302 ms 00:35:39.460 [2024-11-20 13:55:31.308255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.309014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.460 [2024-11-20 13:55:31.309050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:39.460 [2024-11-20 13:55:31.309069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.700 ms 00:35:39.460 [2024-11-20 13:55:31.309083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.389967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.460 [2024-11-20 13:55:31.390026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:35:39.460 [2024-11-20 13:55:31.390067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 80.803 ms 00:35:39.460 [2024-11-20 13:55:31.390079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.422926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:35:39.460 [2024-11-20 13:55:31.422995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:35:39.460 [2024-11-20 13:55:31.423034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.735 ms 00:35:39.460 [2024-11-20 13:55:31.423047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.460 [2024-11-20 13:55:31.456593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.460 [2024-11-20 13:55:31.456843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:35:39.461 [2024-11-20 13:55:31.456897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.464 ms 00:35:39.461 [2024-11-20 13:55:31.456914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.461 [2024-11-20 13:55:31.490167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.461 [2024-11-20 13:55:31.490214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:39.461 [2024-11-20 13:55:31.490235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.188 ms 00:35:39.461 [2024-11-20 13:55:31.490248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.461 [2024-11-20 13:55:31.490310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.461 [2024-11-20 13:55:31.490344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:39.461 [2024-11-20 13:55:31.490366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:39.461 [2024-11-20 13:55:31.490378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.461 [2024-11-20 13:55:31.490499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.461 [2024-11-20 13:55:31.490518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:39.461 [2024-11-20 13:55:31.490537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:35:39.461 [2024-11-20 13:55:31.490548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.461 [2024-11-20 13:55:31.491637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2420.181 ms, result 0 00:35:39.461 { 00:35:39.461 "name": "ftl", 00:35:39.461 "uuid": "72fef36f-a842-4a7e-9084-f2ffcfb0b342" 00:35:39.461 } 00:35:39.718 13:55:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:35:39.976 [2024-11-20 13:55:31.815031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.976 13:55:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:35:40.234 13:55:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:35:40.491 [2024-11-20 13:55:32.419752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:40.491 13:55:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:35:40.749 [2024-11-20 13:55:32.737606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:40.749 13:55:32 
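The four RPCs above publish the freshly created ftl bdev over NVMe/TCP on loopback; condensed, verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1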
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:35:41.318 Fill FTL, iteration 1 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83663 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83663 /var/tmp/spdk.tgt.sock 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83663 ']' 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:35:41.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.318 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:41.318 [2024-11-20 13:55:33.317121] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:35:41.318 [2024-11-20 13:55:33.317288] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83663 ] 00:35:41.577 [2024-11-20 13:55:33.511197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.836 [2024-11-20 13:55:33.636498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.403 13:55:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.403 13:55:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:42.403 13:55:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:35:42.971 ftln1 00:35:42.971 13:55:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:35:42.971 13:55:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83663 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83663 ']' 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83663 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83663 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83663' 00:35:43.230 killing process with pid 83663 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83663 00:35:43.230 13:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83663 00:35:45.136 13:55:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:35:45.136 13:55:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:45.395 [2024-11-20 13:55:37.226408] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
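tcp_dd above is a two-step wrapper: tcp_initiator_setup first launches a helper spdk_tgt on core 1, attaches the exported subsystem over TCP (yielding ftln1), dumps that bdev configuration to ini.json, and kills the helper; spdk_dd then replays the json and performs the transfer itself. Each fill pass writes bs x count = 1048576 x 1024 = 1073741824 B, the 'size' set earlier, at queue depth 2. The dd invocation, verbatim:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0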
00:35:45.395 [2024-11-20 13:55:37.226553] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83716 ] 00:35:45.395 [2024-11-20 13:55:37.400783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.653 [2024-11-20 13:55:37.507592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.027  [2024-11-20T13:55:39.998Z] Copying: 203/1024 [MB] (203 MBps) [2024-11-20T13:55:41.374Z] Copying: 410/1024 [MB] (207 MBps) [2024-11-20T13:55:42.309Z] Copying: 620/1024 [MB] (210 MBps) [2024-11-20T13:55:43.244Z] Copying: 829/1024 [MB] (209 MBps) [2024-11-20T13:55:44.180Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:35:52.141 00:35:52.141 Calculate MD5 checksum, iteration 1 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:52.141 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:52.141 [2024-11-20 13:55:44.069825] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:35:52.141 [2024-11-20 13:55:44.070014] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83786 ] 00:35:52.400 [2024-11-20 13:55:44.250736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.400 [2024-11-20 13:55:44.357382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.777  [2024-11-20T13:55:47.193Z] Copying: 477/1024 [MB] (477 MBps) [2024-11-20T13:55:47.193Z] Copying: 965/1024 [MB] (488 MBps) [2024-11-20T13:55:47.761Z] Copying: 1024/1024 [MB] (average 483 MBps) 00:35:55.722 00:35:55.722 13:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:35:55.722 13:55:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:58.253 Fill FTL, iteration 2 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7d58f57346a32a0d4f208a70c2e4edfa 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:58.253 13:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:58.253 [2024-11-20 13:55:50.083972] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
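The upgrade_shutdown.sh@38-@48 markers above come from the test's fill/verify loop: each iteration writes 1 GiB of /dev/urandom into ftln1 through spdk_dd, reads the same region back, and records its MD5 so the data can be compared after the restart. A rough rendering reconstructed from the trace (the loop framing and the $((...)) arithmetic are inferred; the commands, flags, and paths are the ones shown above):

  i=0 seek=0 skip=0 iterations=2
  while (( i < iterations )); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    seek=$((seek + 1024))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    (( i++ ))
  done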
00:35:58.253 [2024-11-20 13:55:50.084131] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83849 ] 00:35:58.253 [2024-11-20 13:55:50.277064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.512 [2024-11-20 13:55:50.422010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.887  [2024-11-20T13:55:52.862Z] Copying: 213/1024 [MB] (213 MBps) [2024-11-20T13:55:54.241Z] Copying: 416/1024 [MB] (203 MBps) [2024-11-20T13:55:55.178Z] Copying: 619/1024 [MB] (203 MBps) [2024-11-20T13:55:56.114Z] Copying: 827/1024 [MB] (208 MBps) [2024-11-20T13:55:57.049Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:36:05.010 00:36:05.010 Calculate MD5 checksum, iteration 2 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:05.010 13:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:05.010 [2024-11-20 13:55:56.973505] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:36:05.010 [2024-11-20 13:55:56.973664] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83919 ] 00:36:05.268 [2024-11-20 13:55:57.152892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.268 [2024-11-20 13:55:57.271024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.171  [2024-11-20T13:56:00.244Z] Copying: 498/1024 [MB] (498 MBps) [2024-11-20T13:56:00.244Z] Copying: 948/1024 [MB] (450 MBps) [2024-11-20T13:56:01.619Z] Copying: 1024/1024 [MB] (average 469 MBps) 00:36:09.580 00:36:09.580 13:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:36:09.580 13:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:12.113 13:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:36:12.113 13:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=287e39f2d5ed15797c4980a2b81d1d9e 00:36:12.113 13:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:36:12.113 13:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:12.113 13:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:36:12.113 [2024-11-20 13:56:03.821197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:12.113 [2024-11-20 13:56:03.821503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:12.113 [2024-11-20 13:56:03.821636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:36:12.113 [2024-11-20 13:56:03.821703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:12.113 [2024-11-20 13:56:03.821785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:12.113 [2024-11-20 13:56:03.822002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:12.113 [2024-11-20 13:56:03.822077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:12.113 [2024-11-20 13:56:03.822116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:12.113 [2024-11-20 13:56:03.822265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:12.113 [2024-11-20 13:56:03.822289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:12.113 [2024-11-20 13:56:03.822303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:12.113 [2024-11-20 13:56:03.822314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:12.113 [2024-11-20 13:56:03.822392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.190 ms, result 0 00:36:12.113 true 00:36:12.113 13:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:12.113 { 00:36:12.113 "name": "ftl", 00:36:12.113 "properties": [ 00:36:12.113 { 00:36:12.113 "name": "superblock_version", 00:36:12.113 "value": 5, 00:36:12.113 "read-only": true 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "name": "base_device", 00:36:12.113 "bands": [ 00:36:12.113 { 00:36:12.113 "id": 
0, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 1, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 2, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 3, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 4, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 5, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 6, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 7, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 8, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 9, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 10, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 11, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 12, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 13, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 14, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 15, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 16, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 17, 00:36:12.113 "state": "FREE", 00:36:12.113 "validity": 0.0 00:36:12.113 } 00:36:12.113 ], 00:36:12.113 "read-only": true 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "name": "cache_device", 00:36:12.113 "type": "bdev", 00:36:12.113 "chunks": [ 00:36:12.113 { 00:36:12.113 "id": 0, 00:36:12.113 "state": "INACTIVE", 00:36:12.113 "utilization": 0.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 1, 00:36:12.113 "state": "CLOSED", 00:36:12.113 "utilization": 1.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 2, 00:36:12.113 "state": "CLOSED", 00:36:12.113 "utilization": 1.0 00:36:12.113 }, 00:36:12.113 { 00:36:12.113 "id": 3, 00:36:12.113 "state": "OPEN", 00:36:12.114 "utilization": 0.001953125 00:36:12.114 }, 00:36:12.114 { 00:36:12.114 "id": 4, 00:36:12.114 "state": "OPEN", 00:36:12.114 "utilization": 0.0 00:36:12.114 } 00:36:12.114 ], 00:36:12.114 "read-only": true 00:36:12.114 }, 00:36:12.114 { 00:36:12.114 "name": "verbose_mode", 00:36:12.114 "value": true, 00:36:12.114 "unit": "", 00:36:12.114 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:36:12.114 }, 00:36:12.114 { 00:36:12.114 "name": "prep_upgrade_on_shutdown", 00:36:12.114 "value": false, 00:36:12.114 "unit": "", 00:36:12.114 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:36:12.114 } 00:36:12.114 ] 00:36:12.114 } 00:36:12.114 13:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:36:12.373 [2024-11-20 13:56:04.365063] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:12.373 [2024-11-20 13:56:04.365137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:12.373 [2024-11-20 13:56:04.365171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:36:12.373 [2024-11-20 13:56:04.365182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:12.373 [2024-11-20 13:56:04.365232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:12.373 [2024-11-20 13:56:04.365248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:12.373 [2024-11-20 13:56:04.365258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:12.373 [2024-11-20 13:56:04.365268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:12.373 [2024-11-20 13:56:04.365309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:12.373 [2024-11-20 13:56:04.365321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:12.373 [2024-11-20 13:56:04.365331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:36:12.373 [2024-11-20 13:56:04.365341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:12.373 [2024-11-20 13:56:04.365408] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.366 ms, result 0 00:36:12.373 true 00:36:12.373 13:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:36:12.373 13:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:36:12.373 13:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:12.941 13:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:36:12.941 13:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:36:12.941 13:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:36:13.200 [2024-11-20 13:56:05.133383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.200 [2024-11-20 13:56:05.133443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:13.200 [2024-11-20 13:56:05.133464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:13.200 [2024-11-20 13:56:05.133476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.200 [2024-11-20 13:56:05.133513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.200 [2024-11-20 13:56:05.133528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:13.200 [2024-11-20 13:56:05.133540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:13.200 [2024-11-20 13:56:05.133552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.200 [2024-11-20 13:56:05.133580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.200 [2024-11-20 13:56:05.133594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:13.200 [2024-11-20 13:56:05.133605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:13.200 [2024-11-20 
13:56:05.133616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.200 [2024-11-20 13:56:05.133692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.296 ms, result 0 00:36:13.200 true 00:36:13.200 13:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:13.459 { 00:36:13.459 "name": "ftl", 00:36:13.459 "properties": [ 00:36:13.459 { 00:36:13.459 "name": "superblock_version", 00:36:13.459 "value": 5, 00:36:13.459 "read-only": true 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "name": "base_device", 00:36:13.459 "bands": [ 00:36:13.459 { 00:36:13.459 "id": 0, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 1, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 2, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 3, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 4, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 5, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 6, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 7, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 8, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 9, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 10, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 11, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 12, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 13, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 14, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 15, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 16, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 17, 00:36:13.459 "state": "FREE", 00:36:13.459 "validity": 0.0 00:36:13.459 } 00:36:13.459 ], 00:36:13.459 "read-only": true 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "name": "cache_device", 00:36:13.459 "type": "bdev", 00:36:13.459 "chunks": [ 00:36:13.459 { 00:36:13.459 "id": 0, 00:36:13.459 "state": "INACTIVE", 00:36:13.459 "utilization": 0.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 1, 00:36:13.459 "state": "CLOSED", 00:36:13.459 "utilization": 1.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 2, 00:36:13.459 "state": "CLOSED", 00:36:13.459 "utilization": 1.0 00:36:13.459 }, 00:36:13.459 { 00:36:13.459 "id": 3, 00:36:13.459 "state": "OPEN", 00:36:13.459 "utilization": 0.001953125 00:36:13.460 }, 00:36:13.460 { 00:36:13.460 "id": 4, 00:36:13.460 "state": "OPEN", 00:36:13.460 "utilization": 0.0 00:36:13.460 } 00:36:13.460 ], 00:36:13.460 "read-only": true 00:36:13.460 
}, 00:36:13.460 { 00:36:13.460 "name": "verbose_mode", 00:36:13.460 "value": true, 00:36:13.460 "unit": "", 00:36:13.460 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:36:13.460 }, 00:36:13.460 { 00:36:13.460 "name": "prep_upgrade_on_shutdown", 00:36:13.460 "value": true, 00:36:13.460 "unit": "", 00:36:13.460 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:36:13.460 } 00:36:13.460 ] 00:36:13.460 } 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83545 ]] 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83545 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83545 ']' 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83545 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83545 00:36:13.719 killing process with pid 83545 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83545' 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83545 00:36:13.719 13:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83545 00:36:14.657 [2024-11-20 13:56:06.544134] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:36:14.657 [2024-11-20 13:56:06.560497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.657 [2024-11-20 13:56:06.560550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:36:14.657 [2024-11-20 13:56:06.560570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:14.657 [2024-11-20 13:56:06.560597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.657 [2024-11-20 13:56:06.560641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:36:14.657 [2024-11-20 13:56:06.564300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.657 [2024-11-20 13:56:06.564334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:36:14.657 [2024-11-20 13:56:06.564349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.628 ms 00:36:14.657 [2024-11-20 13:56:06.564361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.265462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.265535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:24.658 [2024-11-20 13:56:16.265557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9701.124 ms 00:36:24.658 [2024-11-20 13:56:16.265577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 
13:56:16.266840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.266902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:24.658 [2024-11-20 13:56:16.266919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.239 ms 00:36:24.658 [2024-11-20 13:56:16.266931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.268181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.268215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:24.658 [2024-11-20 13:56:16.268232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.200 ms 00:36:24.658 [2024-11-20 13:56:16.268250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.281234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.281278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:24.658 [2024-11-20 13:56:16.281296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.932 ms 00:36:24.658 [2024-11-20 13:56:16.281308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.289230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.289277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:24.658 [2024-11-20 13:56:16.289294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.875 ms 00:36:24.658 [2024-11-20 13:56:16.289307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.289443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.289465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:24.658 [2024-11-20 13:56:16.289486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:36:24.658 [2024-11-20 13:56:16.289498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.302363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.302404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:24.658 [2024-11-20 13:56:16.302421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.841 ms 00:36:24.658 [2024-11-20 13:56:16.302432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.315248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.315459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:24.658 [2024-11-20 13:56:16.315488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.774 ms 00:36:24.658 [2024-11-20 13:56:16.315500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.328451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.328518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:24.658 [2024-11-20 13:56:16.328564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.902 ms 00:36:24.658 [2024-11-20 13:56:16.328574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:36:24.658 [2024-11-20 13:56:16.341433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.658 [2024-11-20 13:56:16.341476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:24.658 [2024-11-20 13:56:16.341492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.716 ms 00:36:24.658 [2024-11-20 13:56:16.341503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.658 [2024-11-20 13:56:16.341560] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:24.658 [2024-11-20 13:56:16.341583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:24.658 [2024-11-20 13:56:16.341597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:24.658 [2024-11-20 13:56:16.341623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:24.658 [2024-11-20 13:56:16.341635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:24.658 [2024-11-20 13:56:16.341800] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:24.658 [2024-11-20 13:56:16.341810] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 72fef36f-a842-4a7e-9084-f2ffcfb0b342 00:36:24.658 [2024-11-20 13:56:16.341821] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:24.658 [2024-11-20 
13:56:16.341831] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:36:24.658 [2024-11-20 13:56:16.341841] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:36:24.658 [2024-11-20 13:56:16.341852] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:36:24.658 [2024-11-20 13:56:16.341862] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:24.658 [2024-11-20 13:56:16.341878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:24.658 [2024-11-20 13:56:16.341923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:24.658 [2024-11-20 13:56:16.341934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:24.659 [2024-11-20 13:56:16.341945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:24.659 [2024-11-20 13:56:16.341956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.659 [2024-11-20 13:56:16.341971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:24.659 [2024-11-20 13:56:16.341984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:36:24.659 [2024-11-20 13:56:16.342013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.359511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.659 [2024-11-20 13:56:16.359678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:24.659 [2024-11-20 13:56:16.359705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.454 ms 00:36:24.659 [2024-11-20 13:56:16.359726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.360206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:24.659 [2024-11-20 13:56:16.360233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:24.659 [2024-11-20 13:56:16.360247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:36:24.659 [2024-11-20 13:56:16.360259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.417591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.417658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:24.659 [2024-11-20 13:56:16.417683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.417696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.417762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.417777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:24.659 [2024-11-20 13:56:16.417789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.417801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.417936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.417958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:24.659 [2024-11-20 13:56:16.417971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.417990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:36:24.659 [2024-11-20 13:56:16.418016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.418031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:24.659 [2024-11-20 13:56:16.418043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.418053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.526210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.526270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:24.659 [2024-11-20 13:56:16.526295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.526308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.614911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.614971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:24.659 [2024-11-20 13:56:16.614989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.615002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.615138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.615159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:24.659 [2024-11-20 13:56:16.615171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.615183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.615252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.615270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:24.659 [2024-11-20 13:56:16.615282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.615293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.615450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.615467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:24.659 [2024-11-20 13:56:16.615479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.615489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.615555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.615610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:24.659 [2024-11-20 13:56:16.615622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.615634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.615681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.615695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:24.659 [2024-11-20 13:56:16.615707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.615718] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.615775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:24.659 [2024-11-20 13:56:16.615793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:24.659 [2024-11-20 13:56:16.615806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:24.659 [2024-11-20 13:56:16.615816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:24.659 [2024-11-20 13:56:16.616153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 10055.503 ms, result 0 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84148 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84148 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84148 ']' 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:31.225 13:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:31.225 [2024-11-20 13:56:22.640346] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
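The shutdown/restart visible across the preceding records maps to two helpers in test/ftl/common.sh (@130-@132 and @81-@91): the target is killed so FTL persists its state (prep_upgrade_on_shutdown was set to true above, hence the ~10 s 'FTL shutdown' management process), then a fresh spdk_tgt is started from the saved config. A rough sketch reconstructed from the trace, not quoted from the source (the backgrounding and $! capture are assumptions; killprocess, unset, the spdk_tgt command line, and waitforlisten all appear above):

  # tcp_target_shutdown: stop the target; FTL persists metadata on the way down
  [[ -n $spdk_tgt_pid ]] && killprocess $spdk_tgt_pid
  unset spdk_tgt_pid
  # tcp_target_setup: restart the target from the saved config, wait for RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid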
00:36:31.225 [2024-11-20 13:56:22.640525] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84148 ] 00:36:31.225 [2024-11-20 13:56:22.820551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.225 [2024-11-20 13:56:22.928877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:31.793 [2024-11-20 13:56:23.814235] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:31.793 [2024-11-20 13:56:23.814327] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:32.053 [2024-11-20 13:56:23.962160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.962221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:32.053 [2024-11-20 13:56:23.962255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:32.053 [2024-11-20 13:56:23.962266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.962339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.962357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:32.053 [2024-11-20 13:56:23.962369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:36:32.053 [2024-11-20 13:56:23.962393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.962431] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:32.053 [2024-11-20 13:56:23.963404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:32.053 [2024-11-20 13:56:23.963452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.963481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:32.053 [2024-11-20 13:56:23.963494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.034 ms 00:36:32.053 [2024-11-20 13:56:23.963504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.964836] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:32.053 [2024-11-20 13:56:23.979725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.979766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:32.053 [2024-11-20 13:56:23.979804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.891 ms 00:36:32.053 [2024-11-20 13:56:23.979815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.979930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.979951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:32.053 [2024-11-20 13:56:23.979963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:36:32.053 [2024-11-20 13:56:23.979974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.984645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 
13:56:23.984860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:32.053 [2024-11-20 13:56:23.984932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.556 ms 00:36:32.053 [2024-11-20 13:56:23.984945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.985030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.985050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:32.053 [2024-11-20 13:56:23.985063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:36:32.053 [2024-11-20 13:56:23.985074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.985145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.985163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:32.053 [2024-11-20 13:56:23.985243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:36:32.053 [2024-11-20 13:56:23.985253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.985304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:32.053 [2024-11-20 13:56:23.989403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.989445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:32.053 [2024-11-20 13:56:23.989481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.108 ms 00:36:32.053 [2024-11-20 13:56:23.989503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.989551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.989570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:32.053 [2024-11-20 13:56:23.989587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:32.053 [2024-11-20 13:56:23.989600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.989644] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:32.053 [2024-11-20 13:56:23.989689] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:32.053 [2024-11-20 13:56:23.989732] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:32.053 [2024-11-20 13:56:23.989750] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:32.053 [2024-11-20 13:56:23.989854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:32.053 [2024-11-20 13:56:23.989868] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:32.053 [2024-11-20 13:56:23.989920] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:32.053 [2024-11-20 13:56:23.989953] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:32.053 [2024-11-20 13:56:23.989966] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:36:32.053 [2024-11-20 13:56:23.989982] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:32.053 [2024-11-20 13:56:23.989993] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:32.053 [2024-11-20 13:56:23.990003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:32.053 [2024-11-20 13:56:23.990028] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:32.053 [2024-11-20 13:56:23.990039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.990050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:32.053 [2024-11-20 13:56:23.990061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:36:32.053 [2024-11-20 13:56:23.990072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.990200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.053 [2024-11-20 13:56:23.990219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:32.053 [2024-11-20 13:56:23.990230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.096 ms 00:36:32.053 [2024-11-20 13:56:23.990245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.053 [2024-11-20 13:56:23.990425] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:32.053 [2024-11-20 13:56:23.990443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:32.053 [2024-11-20 13:56:23.990455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:32.053 [2024-11-20 13:56:23.990467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.053 [2024-11-20 13:56:23.990477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:32.054 [2024-11-20 13:56:23.990487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:32.054 [2024-11-20 13:56:23.990507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:32.054 [2024-11-20 13:56:23.990517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:32.054 [2024-11-20 13:56:23.990527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:32.054 [2024-11-20 13:56:23.990547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:32.054 [2024-11-20 13:56:23.990557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:32.054 [2024-11-20 13:56:23.990577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:32.054 [2024-11-20 13:56:23.990587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:32.054 [2024-11-20 13:56:23.990607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:32.054 [2024-11-20 13:56:23.990616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990626] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:32.054 [2024-11-20 13:56:23.990636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:32.054 [2024-11-20 13:56:23.990646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:32.054 [2024-11-20 13:56:23.990655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:32.054 [2024-11-20 13:56:23.990665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:32.054 [2024-11-20 13:56:23.990675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:32.054 [2024-11-20 13:56:23.990699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:32.054 [2024-11-20 13:56:23.990710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:32.054 [2024-11-20 13:56:23.990729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:32.054 [2024-11-20 13:56:23.990739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:32.054 [2024-11-20 13:56:23.990749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:32.054 [2024-11-20 13:56:23.990758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:32.054 [2024-11-20 13:56:23.990768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:32.054 [2024-11-20 13:56:23.990778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:32.054 [2024-11-20 13:56:23.990787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:32.054 [2024-11-20 13:56:23.990836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:32.054 [2024-11-20 13:56:23.990846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:32.054 [2024-11-20 13:56:23.990866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:32.054 [2024-11-20 13:56:23.990917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:32.054 [2024-11-20 13:56:23.990927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990937] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:32.054 [2024-11-20 13:56:23.990949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:32.054 [2024-11-20 13:56:23.990961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:32.054 [2024-11-20 13:56:23.990972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:32.054 [2024-11-20 13:56:23.990989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:32.054 [2024-11-20 13:56:23.990999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:32.054 [2024-11-20 13:56:23.991009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:32.054 [2024-11-20 13:56:23.991020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:32.054 [2024-11-20 13:56:23.991030] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:32.054 [2024-11-20 13:56:23.991040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:32.054 [2024-11-20 13:56:23.991052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:32.054 [2024-11-20 13:56:23.991066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:32.054 [2024-11-20 13:56:23.991090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:32.054 [2024-11-20 13:56:23.991124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:32.054 [2024-11-20 13:56:23.991135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:32.054 [2024-11-20 13:56:23.991146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:32.054 [2024-11-20 13:56:23.991157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:32.054 [2024-11-20 13:56:23.991251] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:32.054 [2024-11-20 13:56:23.991263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:32.054 [2024-11-20 13:56:23.991294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:32.054 [2024-11-20 13:56:23.991305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:32.054 [2024-11-20 13:56:23.991316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:32.054 [2024-11-20 13:56:23.991328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:32.054 [2024-11-20 13:56:23.991339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:32.054 [2024-11-20 13:56:23.991350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.985 ms 00:36:32.054 [2024-11-20 13:56:23.991361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:32.054 [2024-11-20 13:56:23.991421] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:36:32.054 [2024-11-20 13:56:23.991598] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:36:33.958 [2024-11-20 13:56:25.859903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.860257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:33.958 [2024-11-20 13:56:25.860290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1868.493 ms 00:36:33.958 [2024-11-20 13:56:25.860319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.892147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.892225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:33.958 [2024-11-20 13:56:25.892246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.558 ms 00:36:33.958 [2024-11-20 13:56:25.892258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.892429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.892455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:33.958 [2024-11-20 13:56:25.892467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:36:33.958 [2024-11-20 13:56:25.892477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.929475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.929531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:33.958 [2024-11-20 13:56:25.929564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.936 ms 00:36:33.958 [2024-11-20 13:56:25.929580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.929650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.929666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:33.958 [2024-11-20 13:56:25.929677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:33.958 [2024-11-20 13:56:25.929686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.930181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.930230] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:33.958 [2024-11-20 13:56:25.930244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:36:33.958 [2024-11-20 13:56:25.930254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.930332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.930363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:33.958 [2024-11-20 13:56:25.930390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:36:33.958 [2024-11-20 13:56:25.930415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.946491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.946536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:33.958 [2024-11-20 13:56:25.946572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.031 ms 00:36:33.958 [2024-11-20 13:56:25.946583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.962507] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:36:33.958 [2024-11-20 13:56:25.962549] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:33.958 [2024-11-20 13:56:25.962582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.962592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:36:33.958 [2024-11-20 13:56:25.962618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.850 ms 00:36:33.958 [2024-11-20 13:56:25.962628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.979738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.979979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:36:33.958 [2024-11-20 13:56:25.980040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.045 ms 00:36:33.958 [2024-11-20 13:56:25.980053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:33.958 [2024-11-20 13:56:25.994290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:33.958 [2024-11-20 13:56:25.994330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:36:33.958 [2024-11-20 13:56:25.994361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.184 ms 00:36:33.958 [2024-11-20 13:56:25.994371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.217 [2024-11-20 13:56:26.009619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.217 [2024-11-20 13:56:26.009712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:36:34.217 [2024-11-20 13:56:26.009746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.195 ms 00:36:34.217 [2024-11-20 13:56:26.009756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.217 [2024-11-20 13:56:26.010908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.217 [2024-11-20 13:56:26.010950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:34.217 [2024-11-20 
13:56:26.010966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.863 ms 00:36:34.217 [2024-11-20 13:56:26.010977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.217 [2024-11-20 13:56:26.088005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.217 [2024-11-20 13:56:26.088079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:34.217 [2024-11-20 13:56:26.088114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.996 ms 00:36:34.217 [2024-11-20 13:56:26.088124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.217 [2024-11-20 13:56:26.099134] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:34.217 [2024-11-20 13:56:26.099882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.217 [2024-11-20 13:56:26.099955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:34.217 [2024-11-20 13:56:26.099975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.682 ms 00:36:34.217 [2024-11-20 13:56:26.099986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.217 [2024-11-20 13:56:26.100132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.217 [2024-11-20 13:56:26.100155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:36:34.217 [2024-11-20 13:56:26.100169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:34.217 [2024-11-20 13:56:26.100181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.217 [2024-11-20 13:56:26.100315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.217 [2024-11-20 13:56:26.100364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:34.217 [2024-11-20 13:56:26.100378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:36:34.217 [2024-11-20 13:56:26.100389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.217 [2024-11-20 13:56:26.100423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.218 [2024-11-20 13:56:26.100437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:34.218 [2024-11-20 13:56:26.100455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:34.218 [2024-11-20 13:56:26.100466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.218 [2024-11-20 13:56:26.100509] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:34.218 [2024-11-20 13:56:26.100526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.218 [2024-11-20 13:56:26.100536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:34.218 [2024-11-20 13:56:26.100548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:36:34.218 [2024-11-20 13:56:26.100558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.218 [2024-11-20 13:56:26.127655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.218 [2024-11-20 13:56:26.127878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:34.218 [2024-11-20 13:56:26.127918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.071 ms 00:36:34.218 [2024-11-20 13:56:26.127931] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.218 [2024-11-20 13:56:26.128025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.218 [2024-11-20 13:56:26.128043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:34.218 [2024-11-20 13:56:26.128056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:36:34.218 [2024-11-20 13:56:26.128066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.218 [2024-11-20 13:56:26.129355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2166.635 ms, result 0 00:36:34.218 [2024-11-20 13:56:26.144236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.218 [2024-11-20 13:56:26.160263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:34.218 [2024-11-20 13:56:26.168928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:34.218 13:56:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:34.218 13:56:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:34.218 13:56:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:34.218 13:56:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:34.218 13:56:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:36:34.477 [2024-11-20 13:56:26.449178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.477 [2024-11-20 13:56:26.449273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:34.477 [2024-11-20 13:56:26.449309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:36:34.477 [2024-11-20 13:56:26.449326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.477 [2024-11-20 13:56:26.449377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.477 [2024-11-20 13:56:26.449392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:34.477 [2024-11-20 13:56:26.449418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:34.477 [2024-11-20 13:56:26.449443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.477 [2024-11-20 13:56:26.449469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.477 [2024-11-20 13:56:26.449481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:34.477 [2024-11-20 13:56:26.449491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:34.477 [2024-11-20 13:56:26.449500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.477 [2024-11-20 13:56:26.449572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.388 ms, result 0 00:36:34.477 true 00:36:34.477 13:56:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:34.736 { 00:36:34.736 "name": "ftl", 00:36:34.736 "properties": [ 00:36:34.736 { 00:36:34.736 "name": "superblock_version", 00:36:34.736 "value": 5, 00:36:34.736 "read-only": true 00:36:34.736 }, 
00:36:34.736 { 00:36:34.736 "name": "base_device", 00:36:34.736 "bands": [ 00:36:34.736 { 00:36:34.736 "id": 0, 00:36:34.736 "state": "CLOSED", 00:36:34.736 "validity": 1.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 1, 00:36:34.736 "state": "CLOSED", 00:36:34.736 "validity": 1.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 2, 00:36:34.736 "state": "CLOSED", 00:36:34.736 "validity": 0.007843137254901933 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 3, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 4, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 5, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 6, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 7, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 8, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 9, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 10, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 11, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 12, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 13, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 14, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 15, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 16, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 17, 00:36:34.736 "state": "FREE", 00:36:34.736 "validity": 0.0 00:36:34.736 } 00:36:34.736 ], 00:36:34.736 "read-only": true 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "name": "cache_device", 00:36:34.736 "type": "bdev", 00:36:34.736 "chunks": [ 00:36:34.736 { 00:36:34.736 "id": 0, 00:36:34.736 "state": "INACTIVE", 00:36:34.736 "utilization": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 1, 00:36:34.736 "state": "OPEN", 00:36:34.736 "utilization": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 2, 00:36:34.736 "state": "OPEN", 00:36:34.736 "utilization": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 3, 00:36:34.736 "state": "FREE", 00:36:34.736 "utilization": 0.0 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "id": 4, 00:36:34.736 "state": "FREE", 00:36:34.736 "utilization": 0.0 00:36:34.736 } 00:36:34.736 ], 00:36:34.736 "read-only": true 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "name": "verbose_mode", 00:36:34.736 "value": true, 00:36:34.736 "unit": "", 00:36:34.736 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:36:34.736 }, 00:36:34.736 { 00:36:34.736 "name": "prep_upgrade_on_shutdown", 00:36:34.736 "value": false, 00:36:34.736 "unit": "", 00:36:34.736 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:36:34.736 } 00:36:34.736 ] 00:36:34.736 } 00:36:34.994 13:56:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:36:34.994 13:56:26 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:36:34.994 13:56:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:35.253 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:36:35.253 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:36:35.253 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:36:35.253 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:36:35.253 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:35.512 Validate MD5 checksum, iteration 1 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:35.512 13:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:35.512 [2024-11-20 13:56:27.520479] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:36:35.512 [2024-11-20 13:56:27.520932] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84205 ] 00:36:35.770 [2024-11-20 13:56:27.697599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.770 [2024-11-20 13:56:27.790475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.673  [2024-11-20T13:56:30.646Z] Copying: 456/1024 [MB] (456 MBps) [2024-11-20T13:56:30.646Z] Copying: 940/1024 [MB] (484 MBps) [2024-11-20T13:56:32.032Z] Copying: 1024/1024 [MB] (average 464 MBps) 00:36:39.993 00:36:39.993 13:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:39.993 13:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:42.539 Validate MD5 checksum, iteration 2 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7d58f57346a32a0d4f208a70c2e4edfa 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7d58f57346a32a0d4f208a70c2e4edfa != \7\d\5\8\f\5\7\3\4\6\a\3\2\a\0\d\4\f\2\0\8\a\7\0\c\2\e\4\e\d\f\a ]] 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:42.539 13:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:42.539 [2024-11-20 13:56:34.324067] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
00:36:42.539 [2024-11-20 13:56:34.324482] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84297 ] 00:36:42.540 [2024-11-20 13:56:34.518342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:42.798 [2024-11-20 13:56:34.702579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.703  [2024-11-20T13:56:37.678Z] Copying: 490/1024 [MB] (490 MBps) [2024-11-20T13:56:37.678Z] Copying: 936/1024 [MB] (446 MBps) [2024-11-20T13:56:39.582Z] Copying: 1024/1024 [MB] (average 463 MBps) 00:36:47.543 00:36:47.543 13:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:47.543 13:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=287e39f2d5ed15797c4980a2b81d1d9e 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 287e39f2d5ed15797c4980a2b81d1d9e != \2\8\7\e\3\9\f\2\d\5\e\d\1\5\7\9\7\c\4\9\8\0\a\2\b\8\1\d\1\d\9\e ]] 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84148 ]] 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84148 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84371 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84371 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84371 ']' 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.078 13:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:50.078 [2024-11-20 13:56:41.630079] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:36:50.078 [2024-11-20 13:56:41.630509] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84371 ] 00:36:50.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84148 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:36:50.078 [2024-11-20 13:56:41.809461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.078 [2024-11-20 13:56:41.912114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.015 [2024-11-20 13:56:42.691645] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:51.015 [2024-11-20 13:56:42.692006] bdev.c:8353:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:51.015 [2024-11-20 13:56:42.844160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.844226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:51.015 [2024-11-20 13:56:42.844248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:51.015 [2024-11-20 13:56:42.844261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.844336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.844355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:51.015 [2024-11-20 13:56:42.844368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:36:51.015 [2024-11-20 13:56:42.844378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.844420] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:51.015 [2024-11-20 13:56:42.845401] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:51.015 [2024-11-20 13:56:42.845448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.845462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:51.015 [2024-11-20 13:56:42.845475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.042 ms 00:36:51.015 [2024-11-20 13:56:42.845486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.845990] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:51.015 [2024-11-20 13:56:42.867657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.867886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:51.015 [2024-11-20 13:56:42.867918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.668 ms 00:36:51.015 [2024-11-20 13:56:42.867932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.880346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:36:51.015 [2024-11-20 13:56:42.880391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:51.015 [2024-11-20 13:56:42.880414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:36:51.015 [2024-11-20 13:56:42.880426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.880943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.880970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:51.015 [2024-11-20 13:56:42.880985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:36:51.015 [2024-11-20 13:56:42.880997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.881070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.881089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:51.015 [2024-11-20 13:56:42.881101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:36:51.015 [2024-11-20 13:56:42.881112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.881148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.881162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:51.015 [2024-11-20 13:56:42.881174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:36:51.015 [2024-11-20 13:56:42.881185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.881218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:51.015 [2024-11-20 13:56:42.885327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.885366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:51.015 [2024-11-20 13:56:42.885383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.116 ms 00:36:51.015 [2024-11-20 13:56:42.885395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.885435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.885449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:51.015 [2024-11-20 13:56:42.885466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:51.015 [2024-11-20 13:56:42.885477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.885525] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:51.015 [2024-11-20 13:56:42.885555] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:51.015 [2024-11-20 13:56:42.885597] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:51.015 [2024-11-20 13:56:42.885620] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:51.015 [2024-11-20 13:56:42.885732] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:51.015 [2024-11-20 13:56:42.885748] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:51.015 [2024-11-20 13:56:42.885763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:51.015 [2024-11-20 13:56:42.885777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:51.015 [2024-11-20 13:56:42.885791] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:51.015 [2024-11-20 13:56:42.885803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:51.015 [2024-11-20 13:56:42.885814] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:51.015 [2024-11-20 13:56:42.885824] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:51.015 [2024-11-20 13:56:42.885836] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:51.015 [2024-11-20 13:56:42.885847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.885864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:51.015 [2024-11-20 13:56:42.885893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:36:51.015 [2024-11-20 13:56:42.885905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.886002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.015 [2024-11-20 13:56:42.886016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:51.015 [2024-11-20 13:56:42.886027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:36:51.015 [2024-11-20 13:56:42.886039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.015 [2024-11-20 13:56:42.886155] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:51.015 [2024-11-20 13:56:42.886171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:51.015 [2024-11-20 13:56:42.886189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:51.015 [2024-11-20 13:56:42.886201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:51.015 [2024-11-20 13:56:42.886223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:51.015 [2024-11-20 13:56:42.886244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:51.015 [2024-11-20 13:56:42.886257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:51.015 [2024-11-20 13:56:42.886267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:51.015 [2024-11-20 13:56:42.886288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:51.015 [2024-11-20 13:56:42.886298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:51.015 [2024-11-20 13:56:42.886321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:36:51.015 [2024-11-20 13:56:42.886331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:51.015 [2024-11-20 13:56:42.886352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:51.015 [2024-11-20 13:56:42.886362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:51.015 [2024-11-20 13:56:42.886383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:51.015 [2024-11-20 13:56:42.886394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:51.015 [2024-11-20 13:56:42.886404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:51.015 [2024-11-20 13:56:42.886427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:51.015 [2024-11-20 13:56:42.886438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:51.015 [2024-11-20 13:56:42.886449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:51.015 [2024-11-20 13:56:42.886459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:51.015 [2024-11-20 13:56:42.886470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:51.015 [2024-11-20 13:56:42.886480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:51.015 [2024-11-20 13:56:42.886490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:51.015 [2024-11-20 13:56:42.886500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:51.015 [2024-11-20 13:56:42.886510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:51.015 [2024-11-20 13:56:42.886521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:51.015 [2024-11-20 13:56:42.886535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:51.015 [2024-11-20 13:56:42.886561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:51.015 [2024-11-20 13:56:42.886572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:51.015 [2024-11-20 13:56:42.886593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:51.015 [2024-11-20 13:56:42.886633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:51.015 [2024-11-20 13:56:42.886643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:51.015 [2024-11-20 13:56:42.886653] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:51.015 [2024-11-20 13:56:42.886666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:51.015 [2024-11-20 13:56:42.886679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:51.016 [2024-11-20 13:56:42.886690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:36:51.016 [2024-11-20 13:56:42.886702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:51.016 [2024-11-20 13:56:42.886713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:51.016 [2024-11-20 13:56:42.886723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:51.016 [2024-11-20 13:56:42.886733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:51.016 [2024-11-20 13:56:42.886744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:51.016 [2024-11-20 13:56:42.886754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:51.016 [2024-11-20 13:56:42.886766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:51.016 [2024-11-20 13:56:42.886780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.886793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:51.016 [2024-11-20 13:56:42.886818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.886832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.886844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:51.016 [2024-11-20 13:56:42.886855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:51.016 [2024-11-20 13:56:42.887151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:51.016 [2024-11-20 13:56:42.887236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:51.016 [2024-11-20 13:56:42.887410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.887473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.887531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.887587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.887805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.887965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.888036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:51.016 [2024-11-20 13:56:42.888183] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:36:51.016 [2024-11-20 13:56:42.888252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.888391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:51.016 [2024-11-20 13:56:42.888453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:51.016 [2024-11-20 13:56:42.888600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:51.016 [2024-11-20 13:56:42.888662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:51.016 [2024-11-20 13:56:42.888804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.888848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:51.016 [2024-11-20 13:56:42.888940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.718 ms 00:36:51.016 [2024-11-20 13:56:42.889044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.921516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.921576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:51.016 [2024-11-20 13:56:42.921596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.377 ms 00:36:51.016 [2024-11-20 13:56:42.921609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.921681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.921696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:51.016 [2024-11-20 13:56:42.921716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:51.016 [2024-11-20 13:56:42.921727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.963918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.963981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:51.016 [2024-11-20 13:56:42.964002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.095 ms 00:36:51.016 [2024-11-20 13:56:42.964014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.964084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.964100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:51.016 [2024-11-20 13:56:42.964113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:51.016 [2024-11-20 13:56:42.964125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.964360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.964378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:51.016 [2024-11-20 13:56:42.964392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.103 ms 00:36:51.016 [2024-11-20 13:56:42.964403] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.964461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.964476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:51.016 [2024-11-20 13:56:42.964488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:36:51.016 [2024-11-20 13:56:42.964499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.983470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.983727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:51.016 [2024-11-20 13:56:42.983757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.906 ms 00:36:51.016 [2024-11-20 13:56:42.983778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:42.983970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:42.983996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:36:51.016 [2024-11-20 13:56:42.984010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:36:51.016 [2024-11-20 13:56:42.984021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:43.016901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:43.016958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:36:51.016 [2024-11-20 13:56:43.016977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.845 ms 00:36:51.016 [2024-11-20 13:56:43.016990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.016 [2024-11-20 13:56:43.030415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.016 [2024-11-20 13:56:43.030461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:51.016 [2024-11-20 13:56:43.030492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.775 ms 00:36:51.016 [2024-11-20 13:56:43.030504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.276 [2024-11-20 13:56:43.106048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.276 [2024-11-20 13:56:43.106123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:51.276 [2024-11-20 13:56:43.106153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 75.456 ms 00:36:51.276 [2024-11-20 13:56:43.106166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.276 [2024-11-20 13:56:43.106397] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:36:51.276 [2024-11-20 13:56:43.106548] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:36:51.276 [2024-11-20 13:56:43.106677] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:36:51.276 [2024-11-20 13:56:43.106799] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:36:51.276 [2024-11-20 13:56:43.106826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.276 [2024-11-20 13:56:43.106838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:36:51.276 [2024-11-20 
13:56:43.106851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.582 ms 00:36:51.276 [2024-11-20 13:56:43.106863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.276 [2024-11-20 13:56:43.107017] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:36:51.276 [2024-11-20 13:56:43.107039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.276 [2024-11-20 13:56:43.107056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:36:51.276 [2024-11-20 13:56:43.107069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:36:51.276 [2024-11-20 13:56:43.107080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.276 [2024-11-20 13:56:43.127587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.276 [2024-11-20 13:56:43.127834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:36:51.276 [2024-11-20 13:56:43.127864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.474 ms 00:36:51.276 [2024-11-20 13:56:43.127877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.276 [2024-11-20 13:56:43.140205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.276 [2024-11-20 13:56:43.140412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:36:51.276 [2024-11-20 13:56:43.140441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:36:51.276 [2024-11-20 13:56:43.140458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.276 [2024-11-20 13:56:43.140587] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:36:51.276 [2024-11-20 13:56:43.140739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.276 [2024-11-20 13:56:43.140755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:51.276 [2024-11-20 13:56:43.140768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.155 ms 00:36:51.276 [2024-11-20 13:56:43.140780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.842 [2024-11-20 13:56:43.682099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.842 [2024-11-20 13:56:43.682218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:51.842 [2024-11-20 13:56:43.682240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 540.163 ms 00:36:51.842 [2024-11-20 13:56:43.682253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.842 [2024-11-20 13:56:43.687242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.842 [2024-11-20 13:56:43.687428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:51.842 [2024-11-20 13:56:43.687458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.909 ms 00:36:51.842 [2024-11-20 13:56:43.687487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.842 [2024-11-20 13:56:43.687845] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:36:51.842 [2024-11-20 13:56:43.687890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.842 [2024-11-20 13:56:43.687903] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:51.842 [2024-11-20 13:56:43.687939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:36:51.842 [2024-11-20 13:56:43.687954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.842 [2024-11-20 13:56:43.688000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.842 [2024-11-20 13:56:43.688018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:51.842 [2024-11-20 13:56:43.688031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:51.842 [2024-11-20 13:56:43.688043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:51.842 [2024-11-20 13:56:43.688102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 547.517 ms, result 0 00:36:51.842 [2024-11-20 13:56:43.688160] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:36:51.842 [2024-11-20 13:56:43.688247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:51.842 [2024-11-20 13:56:43.688261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:51.842 [2024-11-20 13:56:43.688273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.089 ms 00:36:51.842 [2024-11-20 13:56:43.688284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.219717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.220074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:52.410 [2024-11-20 13:56:44.220110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 530.254 ms 00:36:52.410 [2024-11-20 13:56:44.220123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.225038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.225084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:52.410 [2024-11-20 13:56:44.225103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.794 ms 00:36:52.410 [2024-11-20 13:56:44.225115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.225474] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:36:52.410 [2024-11-20 13:56:44.225502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.225514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:52.410 [2024-11-20 13:56:44.225526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:36:52.410 [2024-11-20 13:56:44.225537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.225583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.225601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:52.410 [2024-11-20 13:56:44.225613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:52.410 [2024-11-20 13:56:44.225624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 
13:56:44.225676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 537.516 ms, result 0 00:36:52.410 [2024-11-20 13:56:44.225733] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:52.410 [2024-11-20 13:56:44.225749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:52.410 [2024-11-20 13:56:44.225763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.225775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:36:52.410 [2024-11-20 13:56:44.225787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1085.207 ms 00:36:52.410 [2024-11-20 13:56:44.225799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.225840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.225855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:36:52.410 [2024-11-20 13:56:44.225900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:36:52.410 [2024-11-20 13:56:44.225913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.239414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:52.410 [2024-11-20 13:56:44.239580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.239601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:52.410 [2024-11-20 13:56:44.239615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.642 ms 00:36:52.410 [2024-11-20 13:56:44.239627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.240413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.240455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:36:52.410 [2024-11-20 13:56:44.240480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.675 ms 00:36:52.410 [2024-11-20 13:56:44.240493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.243059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.243090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:36:52.410 [2024-11-20 13:56:44.243106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.535 ms 00:36:52.410 [2024-11-20 13:56:44.243117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.243178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.243194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:36:52.410 [2024-11-20 13:56:44.243206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:52.410 [2024-11-20 13:56:44.243225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.243352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.243370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:52.410 
[2024-11-20 13:56:44.243383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:36:52.410 [2024-11-20 13:56:44.243394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.243423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.243436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:52.410 [2024-11-20 13:56:44.243448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:52.410 [2024-11-20 13:56:44.243459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.243506] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:52.410 [2024-11-20 13:56:44.243523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.243535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:52.410 [2024-11-20 13:56:44.243547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:36:52.410 [2024-11-20 13:56:44.243558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.243624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:52.410 [2024-11-20 13:56:44.243641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:52.410 [2024-11-20 13:56:44.243653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:36:52.410 [2024-11-20 13:56:44.243664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:52.410 [2024-11-20 13:56:44.244745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1400.098 ms, result 0 00:36:52.410 [2024-11-20 13:56:44.260217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.411 [2024-11-20 13:56:44.276285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:52.411 [2024-11-20 13:56:44.285809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:52.411 Validate MD5 checksum, iteration 1 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:52.411 13:56:44 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:52.411 13:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:52.670 [2024-11-20 13:56:44.498012] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:36:52.670 [2024-11-20 13:56:44.498418] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84406 ] 00:36:52.670 [2024-11-20 13:56:44.683574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.928 [2024-11-20 13:56:44.812339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.923  [2024-11-20T13:56:47.530Z] Copying: 480/1024 [MB] (480 MBps) [2024-11-20T13:56:47.788Z] Copying: 933/1024 [MB] (453 MBps) [2024-11-20T13:56:49.162Z] Copying: 1024/1024 [MB] (average 468 MBps) 00:36:57.123 00:36:57.123 13:56:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:57.123 13:56:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:59.656 Validate MD5 checksum, iteration 2 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7d58f57346a32a0d4f208a70c2e4edfa 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7d58f57346a32a0d4f208a70c2e4edfa != \7\d\5\8\f\5\7\3\4\6\a\3\2\a\0\d\4\f\2\0\8\a\7\0\c\2\e\4\e\d\f\a ]] 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:59.656 13:56:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:59.656 
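The upgrade_shutdown.sh xtrace above walks one full pass of the test's checksum loop: tcp_dd reads 1024 one-MiB blocks back from the ftln1 bdev over NVMe/TCP, md5sum and cut produce the digest, a bash pattern test compares it, and the skip offset advances by 1024 blocks. A minimal sketch of that loop, reconstructed from the trace: tcp_dd is the wrapper around the spdk_dd command shown above, and the EXPECTED array is a hypothetical stand-in for the digests the real test records before the shutdown under test (the two values below are the sums that appear in this log).

#!/usr/bin/env bash
# Sketch of the validate-checksum loop traced above (upgrade_shutdown.sh@96-105).
# tcp_dd and EXPECTED are assumptions: tcp_dd stands for the spdk_dd wrapper in
# ftl/common.sh, and EXPECTED holds digests the test captures earlier.
set -euo pipefail

testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
iterations=2
EXPECTED=(7d58f57346a32a0d4f208a70c2e4edfa 287e39f2d5ed15797c4980a2b81d1d9e)

skip=0
for (( i = 0; i < iterations; i++ )); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Read 1024 x 1 MiB blocks back from the FTL bdev over NVMe/TCP,
    # offset by $skip blocks (matches the spdk_dd command in the trace).
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$(( skip + 1024 ))
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    [[ $sum == "${EXPECTED[i]}" ]] || { echo "MD5 mismatch in iteration $((i + 1))"; exit 1; }
done

The real script drives the comparison through its iterations counter exactly as the (( i++ )) and (( i < iterations )) steps in the trace show; only the storage of the expected digests is assumed here.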
[2024-11-20 13:56:51.193282] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 00:36:59.656 [2024-11-20 13:56:51.193764] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84481 ] 00:36:59.656 [2024-11-20 13:56:51.384970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.656 [2024-11-20 13:56:51.511809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.560  [2024-11-20T13:56:54.166Z] Copying: 477/1024 [MB] (477 MBps) [2024-11-20T13:56:54.424Z] Copying: 965/1024 [MB] (488 MBps) [2024-11-20T13:56:55.801Z] Copying: 1024/1024 [MB] (average 483 MBps) 00:37:03.762 00:37:03.762 13:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:37:03.762 13:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:05.669 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=287e39f2d5ed15797c4980a2b81d1d9e 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 287e39f2d5ed15797c4980a2b81d1d9e != \2\8\7\e\3\9\f\2\d\5\e\d\1\5\7\9\7\c\4\9\8\0\a\2\b\8\1\d\1\d\9\e ]] 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84371 ]] 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84371 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84371 ']' 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84371 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84371 00:37:05.670 killing process with pid 84371 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84371' 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84371 00:37:05.670 13:56:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84371 00:37:06.647 [2024-11-20 13:56:58.474424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:37:06.647 [2024-11-20 13:56:58.488387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.488428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:37:06.647 [2024-11-20 13:56:58.488461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:06.647 [2024-11-20 13:56:58.488472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.488500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:37:06.647 [2024-11-20 13:56:58.491551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.491761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:37:06.647 [2024-11-20 13:56:58.491797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.029 ms 00:37:06.647 [2024-11-20 13:56:58.491809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.492097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.492118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:37:06.647 [2024-11-20 13:56:58.492130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:37:06.647 [2024-11-20 13:56:58.492172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.493519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.493556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:37:06.647 [2024-11-20 13:56:58.493586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.324 ms 00:37:06.647 [2024-11-20 13:56:58.493597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.494804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.495074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:37:06.647 [2024-11-20 13:56:58.495103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.144 ms 00:37:06.647 [2024-11-20 13:56:58.495115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.506783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.506936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:37:06.647 [2024-11-20 13:56:58.506958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.598 ms 00:37:06.647 [2024-11-20 13:56:58.506986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.513340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.513574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:37:06.647 [2024-11-20 13:56:58.513603] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.284 ms 00:37:06.647 [2024-11-20 13:56:58.513617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.513709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.513742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:37:06.647 [2024-11-20 13:56:58.513754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:37:06.647 [2024-11-20 13:56:58.513765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.524914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.524949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:37:06.647 [2024-11-20 13:56:58.524978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.119 ms 00:37:06.647 [2024-11-20 13:56:58.524988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.536049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.536084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:37:06.647 [2024-11-20 13:56:58.536114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.023 ms 00:37:06.647 [2024-11-20 13:56:58.536123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.546796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.546861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:37:06.647 [2024-11-20 13:56:58.546907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.635 ms 00:37:06.647 [2024-11-20 13:56:58.546918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.557540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.557574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:37:06.647 [2024-11-20 13:56:58.557603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.537 ms 00:37:06.647 [2024-11-20 13:56:58.557612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.557649] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:37:06.647 [2024-11-20 13:56:58.557669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:06.647 [2024-11-20 13:56:58.557681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:37:06.647 [2024-11-20 13:56:58.557691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:37:06.647 [2024-11-20 13:56:58.557701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 
13:56:58.557740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:06.647 [2024-11-20 13:56:58.557850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:37:06.647 [2024-11-20 13:56:58.557859] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 72fef36f-a842-4a7e-9084-f2ffcfb0b342 00:37:06.647 [2024-11-20 13:56:58.557886] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:37:06.647 [2024-11-20 13:56:58.557914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:37:06.647 [2024-11-20 13:56:58.557923] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:37:06.647 [2024-11-20 13:56:58.557933] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:37:06.647 [2024-11-20 13:56:58.557942] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:37:06.647 [2024-11-20 13:56:58.557952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:37:06.647 [2024-11-20 13:56:58.557961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:37:06.647 [2024-11-20 13:56:58.557970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:37:06.647 [2024-11-20 13:56:58.557978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:37:06.647 [2024-11-20 13:56:58.558006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.558023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:37:06.647 [2024-11-20 13:56:58.558036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:37:06.647 [2024-11-20 13:56:58.558046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.647 [2024-11-20 13:56:58.574490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.647 [2024-11-20 13:56:58.574525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:37:06.647 [2024-11-20 13:56:58.574540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 16.405 ms 00:37:06.648 [2024-11-20 13:56:58.574551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.648 [2024-11-20 13:56:58.575008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.648 [2024-11-20 13:56:58.575028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:37:06.648 [2024-11-20 13:56:58.575040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.431 ms 00:37:06.648 [2024-11-20 13:56:58.575052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.648 [2024-11-20 13:56:58.622641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.648 [2024-11-20 13:56:58.623000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:06.648 [2024-11-20 13:56:58.623134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.648 [2024-11-20 13:56:58.623320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.648 [2024-11-20 13:56:58.623424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.648 [2024-11-20 13:56:58.623539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:06.648 [2024-11-20 13:56:58.623651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.648 [2024-11-20 13:56:58.623700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.648 [2024-11-20 13:56:58.623935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.648 [2024-11-20 13:56:58.624084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:06.648 [2024-11-20 13:56:58.624203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.648 [2024-11-20 13:56:58.624317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.648 [2024-11-20 13:56:58.624385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.648 [2024-11-20 13:56:58.624581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:06.648 [2024-11-20 13:56:58.624633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.648 [2024-11-20 13:56:58.624669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.907 [2024-11-20 13:56:58.713483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.907 [2024-11-20 13:56:58.713547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:06.907 [2024-11-20 13:56:58.713579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.907 [2024-11-20 13:56:58.713590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.907 [2024-11-20 13:56:58.786238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.907 [2024-11-20 13:56:58.786306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:06.907 [2024-11-20 13:56:58.786341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.907 [2024-11-20 13:56:58.786352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.907 [2024-11-20 13:56:58.786500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.907 [2024-11-20 13:56:58.786516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:06.907 [2024-11-20 13:56:58.786526] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.907 [2024-11-20 13:56:58.786536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.907 [2024-11-20 13:56:58.786588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.907 [2024-11-20 13:56:58.786603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:06.908 [2024-11-20 13:56:58.786621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.908 [2024-11-20 13:56:58.786644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.908 [2024-11-20 13:56:58.786759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.908 [2024-11-20 13:56:58.786775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:06.908 [2024-11-20 13:56:58.786786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.908 [2024-11-20 13:56:58.786795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.908 [2024-11-20 13:56:58.786886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.908 [2024-11-20 13:56:58.786943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:37:06.908 [2024-11-20 13:56:58.786959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.908 [2024-11-20 13:56:58.786977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.908 [2024-11-20 13:56:58.787023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.908 [2024-11-20 13:56:58.787037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:06.908 [2024-11-20 13:56:58.787049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.908 [2024-11-20 13:56:58.787060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.908 [2024-11-20 13:56:58.787112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:06.908 [2024-11-20 13:56:58.787127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:06.908 [2024-11-20 13:56:58.787160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:06.908 [2024-11-20 13:56:58.787171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.908 [2024-11-20 13:56:58.787353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 298.898 ms, result 0 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:07.843 Remove shared memory files 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84148 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:37:07.843 ************************************ 00:37:07.843 END TEST ftl_upgrade_shutdown 00:37:07.843 ************************************ 00:37:07.843 00:37:07.843 real 1m35.700s 00:37:07.843 user 2m17.450s 00:37:07.843 sys 0m23.079s 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.843 13:56:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:08.102 13:56:59 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:37:08.102 13:56:59 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:37:08.102 13:56:59 ftl -- ftl/ftl.sh@14 -- # killprocess 76897 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 76897 ']' 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@958 -- # kill -0 76897 00:37:08.102 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76897) - No such process 00:37:08.102 Process with pid 76897 is not found 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76897 is not found' 00:37:08.102 13:56:59 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:37:08.102 13:56:59 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84601 00:37:08.102 13:56:59 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84601 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@835 -- # '[' -z 84601 ']' 00:37:08.102 13:56:59 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:08.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:08.102 13:56:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:08.102 [2024-11-20 13:57:00.032841] Starting SPDK v25.01-pre git sha1 b6a8866f3 / DPDK 24.03.0 initialization... 
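Both exits of the killprocess helper are visible in this stretch of the log: pid 84371 was alive, so its command name was vetted with ps before kill and wait, while pid 76897 was already gone and kill -0 failed with "No such process" before the helper reported it missing. A sketch of the helper reconstructed from those traces (common/autotest_common.sh@954-981); treat the exact bodies as an approximation of the real script:

# Sketch, reconstructed from the xtrace above; not the verbatim helper.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    # If the pid is already gone (the 76897 case above), just report it.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # The trace compares the command name against "sudo" before
        # signalling, so a root helper is never killed by mistake.
        [[ $process_name != sudo ]] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"  # the target is a child of the test shell, so wait reaps it
}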
00:37:08.102 [2024-11-20 13:57:00.033283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84601 ] 00:37:08.360 [2024-11-20 13:57:00.214549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.360 [2024-11-20 13:57:00.315405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.297 13:57:01 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:09.297 13:57:01 ftl -- common/autotest_common.sh@868 -- # return 0 00:37:09.297 13:57:01 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:09.556 nvme0n1 00:37:09.556 13:57:01 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:37:09.556 13:57:01 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:09.556 13:57:01 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:09.815 13:57:01 ftl -- ftl/common.sh@28 -- # stores=751ef0f9-fcb1-4743-953c-d049272cfa85 00:37:09.815 13:57:01 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:37:09.815 13:57:01 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 751ef0f9-fcb1-4743-953c-d049272cfa85 00:37:10.383 13:57:02 ftl -- ftl/ftl.sh@23 -- # killprocess 84601 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@954 -- # '[' -z 84601 ']' 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@958 -- # kill -0 84601 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@959 -- # uname 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84601 00:37:10.383 killing process with pid 84601 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84601' 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@973 -- # kill 84601 00:37:10.383 13:57:02 ftl -- common/autotest_common.sh@978 -- # wait 84601 00:37:12.287 13:57:04 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:12.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:12.546 Waiting for block devices as requested 00:37:12.804 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:12.804 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:12.804 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:37:13.064 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:37:18.342 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:37:18.342 13:57:09 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:37:18.342 Remove shared memory files 00:37:18.342 13:57:09 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:18.342 13:57:09 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:37:18.342 13:57:09 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:37:18.342 13:57:09 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:37:18.342 13:57:09 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:18.342 13:57:09 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:37:18.342 
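The ftl/common.sh@28-30 trace above shows how the suite clears leftover lvol stores before reusing the base bdev: list the stores over JSON-RPC, extract each UUID with jq, and delete them one by one. A sketch of that helper as traced; the rpc.py path matches the log, while adding an -s socket option for a non-default RPC socket would be an assumption:

# Sketch of the clear_lvols helper whose xtrace appears above.
clear_lvols() {
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local stores lvs
    # bdev_lvol_get_lvstores returns a JSON array; keep only the UUIDs.
    stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done
}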
************************************
00:37:18.342 END TEST ftl
00:37:18.342 ************************************
00:37:18.342
00:37:18.342 real 11m44.340s
00:37:18.342 user 14m54.973s
00:37:18.342 sys 1m33.579s
00:37:18.342 13:57:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:37:18.343 13:57:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:37:18.343 13:57:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:37:18.343 13:57:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:37:18.343 13:57:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:37:18.343 13:57:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:37:18.343 13:57:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:37:18.343 13:57:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:37:18.343 13:57:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:37:18.343 13:57:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:37:18.343 13:57:10 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:18.343 13:57:10 -- common/autotest_common.sh@10 -- # set +x
00:37:18.343 13:57:10 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:37:18.343 13:57:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:37:18.343 13:57:10 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:37:18.343 13:57:10 -- common/autotest_common.sh@10 -- # set +x
00:37:19.719 INFO: APP EXITING
00:37:19.719 INFO: killing all VMs
00:37:19.719 INFO: killing vhost app
00:37:19.719 INFO: EXIT DONE
00:37:19.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:20.546 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:37:20.546 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:37:20.546 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:37:20.546 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:37:20.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:21.373 Cleaning
00:37:21.373 Removing: /var/run/dpdk/spdk0/config
00:37:21.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:37:21.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:37:21.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:37:21.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:37:21.373 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:37:21.373 Removing: /var/run/dpdk/spdk0/hugepage_info
00:37:21.373 Removing: /var/run/dpdk/spdk0
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58014
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58238
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58462
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58566
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58622
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58750
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58768
00:37:21.373 Removing: /var/run/dpdk/spdk_pid58978
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59083
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59185
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59307
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59417
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59456
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59493
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59569
00:37:21.373 Removing: /var/run/dpdk/spdk_pid59660
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60145
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60219
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60288
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60310
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60448
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60464
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60599
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60626
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60690
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60708
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60772
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60795
00:37:21.373 Removing: /var/run/dpdk/spdk_pid60991
00:37:21.373 Removing: /var/run/dpdk/spdk_pid61027
00:37:21.373 Removing: /var/run/dpdk/spdk_pid61116
00:37:21.373 Removing: /var/run/dpdk/spdk_pid61305
00:37:21.373 Removing: /var/run/dpdk/spdk_pid61393
00:37:21.373 Removing: /var/run/dpdk/spdk_pid61442
00:37:21.373 Removing: /var/run/dpdk/spdk_pid61915
00:37:21.373 Removing: /var/run/dpdk/spdk_pid62013
00:37:21.373 Removing: /var/run/dpdk/spdk_pid62134
00:37:21.373 Removing: /var/run/dpdk/spdk_pid62193
00:37:21.373 Removing: /var/run/dpdk/spdk_pid62218
00:37:21.373 Removing: /var/run/dpdk/spdk_pid62302
00:37:21.373 Removing: /var/run/dpdk/spdk_pid62933
00:37:21.373 Removing: /var/run/dpdk/spdk_pid62975
00:37:21.373 Removing: /var/run/dpdk/spdk_pid63493
00:37:21.373 Removing: /var/run/dpdk/spdk_pid63591
00:37:21.373 Removing: /var/run/dpdk/spdk_pid63706
00:37:21.373 Removing: /var/run/dpdk/spdk_pid63765
00:37:21.373 Removing: /var/run/dpdk/spdk_pid63790
00:37:21.373 Removing: /var/run/dpdk/spdk_pid63821
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65700
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65844
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65848
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65860
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65906
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65915
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65927
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65972
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65976
00:37:21.373 Removing: /var/run/dpdk/spdk_pid65988
00:37:21.373 Removing: /var/run/dpdk/spdk_pid66037
00:37:21.373 Removing: /var/run/dpdk/spdk_pid66042
00:37:21.633 Removing: /var/run/dpdk/spdk_pid66054
00:37:21.633 Removing: /var/run/dpdk/spdk_pid67461
00:37:21.633 Removing: /var/run/dpdk/spdk_pid67571
00:37:21.633 Removing: /var/run/dpdk/spdk_pid68984
00:37:21.633 Removing: /var/run/dpdk/spdk_pid70704
00:37:21.633 Removing: /var/run/dpdk/spdk_pid70789
00:37:21.633 Removing: /var/run/dpdk/spdk_pid70870
00:37:21.633 Removing: /var/run/dpdk/spdk_pid70976
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71075
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71173
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71254
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71330
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71440
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71537
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71633
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71713
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71794
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71898
00:37:21.633 Removing: /var/run/dpdk/spdk_pid71990
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72097
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72171
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72252
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72355
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72458
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72555
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72629
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72711
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72792
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72862
00:37:21.633 Removing: /var/run/dpdk/spdk_pid72971
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73062
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73157
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73231
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73312
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73392
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73461
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73570
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73662
00:37:21.633 Removing: /var/run/dpdk/spdk_pid73817
00:37:21.633 Removing: /var/run/dpdk/spdk_pid74106
00:37:21.633 Removing: /var/run/dpdk/spdk_pid74138
00:37:21.633 Removing: /var/run/dpdk/spdk_pid74626
00:37:21.633 Removing: /var/run/dpdk/spdk_pid74822
00:37:21.633 Removing: /var/run/dpdk/spdk_pid74918
00:37:21.633 Removing: /var/run/dpdk/spdk_pid75031
00:37:21.633 Removing: /var/run/dpdk/spdk_pid75090
00:37:21.633 Removing: /var/run/dpdk/spdk_pid75117
00:37:21.633 Removing: /var/run/dpdk/spdk_pid75401
00:37:21.633 Removing: /var/run/dpdk/spdk_pid75469
00:37:21.633 Removing: /var/run/dpdk/spdk_pid75555
00:37:21.633 Removing: /var/run/dpdk/spdk_pid75964
00:37:21.633 Removing: /var/run/dpdk/spdk_pid76115
00:37:21.633 Removing: /var/run/dpdk/spdk_pid76897
00:37:21.633 Removing: /var/run/dpdk/spdk_pid77041
00:37:21.633 Removing: /var/run/dpdk/spdk_pid77253
00:37:21.633 Removing: /var/run/dpdk/spdk_pid77356
00:37:21.633 Removing: /var/run/dpdk/spdk_pid77757
00:37:21.633 Removing: /var/run/dpdk/spdk_pid78041
00:37:21.633 Removing: /var/run/dpdk/spdk_pid78389
00:37:21.633 Removing: /var/run/dpdk/spdk_pid78594
00:37:21.633 Removing: /var/run/dpdk/spdk_pid78719
00:37:21.633 Removing: /var/run/dpdk/spdk_pid78783
00:37:21.633 Removing: /var/run/dpdk/spdk_pid78927
00:37:21.633 Removing: /var/run/dpdk/spdk_pid78965
00:37:21.633 Removing: /var/run/dpdk/spdk_pid79025
00:37:21.633 Removing: /var/run/dpdk/spdk_pid79223
00:37:21.633 Removing: /var/run/dpdk/spdk_pid79471
00:37:21.633 Removing: /var/run/dpdk/spdk_pid79863
00:37:21.633 Removing: /var/run/dpdk/spdk_pid80312
00:37:21.633 Removing: /var/run/dpdk/spdk_pid80693
00:37:21.633 Removing: /var/run/dpdk/spdk_pid81226
00:37:21.633 Removing: /var/run/dpdk/spdk_pid81364
00:37:21.633 Removing: /var/run/dpdk/spdk_pid81475
00:37:21.633 Removing: /var/run/dpdk/spdk_pid82118
00:37:21.633 Removing: /var/run/dpdk/spdk_pid82194
00:37:21.633 Removing: /var/run/dpdk/spdk_pid82615
00:37:21.633 Removing: /var/run/dpdk/spdk_pid83030
00:37:21.633 Removing: /var/run/dpdk/spdk_pid83545
00:37:21.633 Removing: /var/run/dpdk/spdk_pid83663
00:37:21.633 Removing: /var/run/dpdk/spdk_pid83716
00:37:21.633 Removing: /var/run/dpdk/spdk_pid83786
00:37:21.633 Removing: /var/run/dpdk/spdk_pid83849
00:37:21.633 Removing: /var/run/dpdk/spdk_pid83919
00:37:21.633 Removing: /var/run/dpdk/spdk_pid84148
00:37:21.892 Removing: /var/run/dpdk/spdk_pid84205
00:37:21.892 Removing: /var/run/dpdk/spdk_pid84297
00:37:21.892 Removing: /var/run/dpdk/spdk_pid84371
00:37:21.892 Removing: /var/run/dpdk/spdk_pid84406
00:37:21.892 Removing: /var/run/dpdk/spdk_pid84481
00:37:21.892 Removing: /var/run/dpdk/spdk_pid84601
00:37:21.892 Clean
00:37:21.892 13:57:13 -- common/autotest_common.sh@1453 -- # return 0
00:37:21.892 13:57:13 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:37:21.892 13:57:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:21.892 13:57:13 -- common/autotest_common.sh@10 -- # set +x
00:37:21.892 13:57:13 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:37:21.892 13:57:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:21.892 13:57:13 -- common/autotest_common.sh@10 -- # set +x
00:37:21.892 13:57:13 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:21.892 13:57:13 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:37:21.892 13:57:13 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:37:21.892 13:57:13 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:37:21.892 13:57:13 -- spdk/autotest.sh@398 -- # hostname
00:37:21.892 13:57:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:37:22.151 geninfo: WARNING: invalid characters removed from testname!
00:37:54.290 13:57:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:54.549 13:57:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:57.836 13:57:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:00.471 13:57:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:03.759 13:57:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:06.294 13:57:58 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:09.578 13:58:00 -- spdk/autotest.sh@408 -- rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
13:58:00 -- spdk/autorun.sh@1 -- $ timing_finish
13:58:00 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
13:58:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
13:58:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
13:58:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5294 ]]
+ sudo kill 5294
00:38:09.587 [Pipeline] }
00:38:09.600 [Pipeline] // timeout
00:38:09.605 [Pipeline] }
00:38:09.619 [Pipeline] // stage
00:38:09.624 [Pipeline] }
00:38:09.638 [Pipeline] // catchError
00:38:09.646 [Pipeline] stage
00:38:09.648 [Pipeline] { (Stop VM)
00:38:09.661 [Pipeline] sh
00:38:09.941 + vagrant halt
00:38:13.225 ==> default: Halting domain...
00:38:19.933 [Pipeline] sh
00:38:20.212 + vagrant destroy -f
00:38:24.402 ==> default: Removing domain...
00:38:24.673 [Pipeline] sh
00:38:24.954 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:38:24.963 [Pipeline] }
00:38:24.978 [Pipeline] // stage
00:38:24.984 [Pipeline] }
00:38:24.999 [Pipeline] // dir
00:38:25.004 [Pipeline] }
00:38:25.019 [Pipeline] // wrap
00:38:25.025 [Pipeline] }
00:38:25.038 [Pipeline] // catchError
00:38:25.047 [Pipeline] stage
00:38:25.050 [Pipeline] { (Epilogue)
00:38:25.062 [Pipeline] sh
00:38:25.344 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:31.923 [Pipeline] catchError
00:38:31.925 [Pipeline] {
00:38:31.939 [Pipeline] sh
00:38:32.221 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:32.480 Artifacts sizes are good
00:38:32.490 [Pipeline] }
00:38:32.504 [Pipeline] // catchError
00:38:32.516 [Pipeline] archiveArtifacts
00:38:32.523 Archiving artifacts
00:38:32.666 [Pipeline] cleanWs
00:38:32.683 [WS-CLEANUP] Deleting project workspace...
00:38:32.683 [WS-CLEANUP] Deferred wipeout is used...
00:38:32.712 [WS-CLEANUP] done
00:38:32.714 [Pipeline] }
00:38:32.731 [Pipeline] // stage
00:38:32.737 [Pipeline] }
00:38:32.753 [Pipeline] // node
00:38:32.759 [Pipeline] End of Pipeline
00:38:32.804 Finished: SUCCESS